Test Report: Docker_Linux_containerd 21642

14b81faeac061460adc41f1c17794999a5c5cccd:2025-09-26:41636

Test fail (12/331)
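Context for the failure below: the test points a host docker client at the minikube node over SSH (docker-env --ssh-host --ssh-add) and then runs a legacy (non-BuildKit) docker build over that connection; in this run the daemon returned "Error response from daemon: exit status 1". A minimal sketch of that sequence, assembled only from commands visible in this log (profile name, ssh-agent socket, and mapped port are specific to this run and would differ locally):

  # start a containerd-backed node with the docker driver
  out/minikube-linux-amd64 start -p dockerenv-288409 --driver=docker --container-runtime=containerd
  # the test runs this with an ssh-agent already started; eval-ing the output
  # (typical docker-env usage) sets DOCKER_HOST=ssh://docker@127.0.0.1:<port>
  eval "$(out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-288409)"
  # the step that failed in this run: legacy builder build over the SSH connection
  DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
  # the test then expects local/minikube-dockerenv-containerd-test in the image list
  docker image ls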

TestDockerEnvContainerd (36.75s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-288409 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-288409 --driver=docker  --container-runtime=containerd: (18.815794674s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-288409"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXDK1ail/agent.38490" SSH_AGENT_PID="38491" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXDK1ail/agent.38490" SSH_AGENT_PID="38491" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Non-zero exit: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXDK1ail/agent.38490" SSH_AGENT_PID="38491" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": exit status 1 (2.219809288s)

-- stdout --
	Sending build context to Docker daemon  2.048kB

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

** /stderr **
docker_test.go:245: failed to build images, error: exit status 1, output:
-- stdout --
	Sending build context to Docker daemon  2.048kB

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

** /stderr **
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXDK1ail/agent.38490" SSH_AGENT_PID="38491" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
docker_test.go:255: failed to detect image 'local/minikube-dockerenv-containerd-test' in output of docker image ls
panic.go:636: *** TestDockerEnvContainerd FAILED at 2025-09-26 22:34:43.177538522 +0000 UTC m=+347.432745586
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestDockerEnvContainerd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestDockerEnvContainerd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect dockerenv-288409
helpers_test.go:243: (dbg) docker inspect dockerenv-288409:

-- stdout --
	[
	    {
	        "Id": "d75fefaf209eb8650388af8dc066e1afa27073d3e65c71ec3c8c8b2f934026cf",
	        "Created": "2025-09-26T22:34:15.150917676Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 35733,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-26T22:34:15.198218083Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/d75fefaf209eb8650388af8dc066e1afa27073d3e65c71ec3c8c8b2f934026cf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d75fefaf209eb8650388af8dc066e1afa27073d3e65c71ec3c8c8b2f934026cf/hostname",
	        "HostsPath": "/var/lib/docker/containers/d75fefaf209eb8650388af8dc066e1afa27073d3e65c71ec3c8c8b2f934026cf/hosts",
	        "LogPath": "/var/lib/docker/containers/d75fefaf209eb8650388af8dc066e1afa27073d3e65c71ec3c8c8b2f934026cf/d75fefaf209eb8650388af8dc066e1afa27073d3e65c71ec3c8c8b2f934026cf-json.log",
	        "Name": "/dockerenv-288409",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "dockerenv-288409:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "dockerenv-288409",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 8388608000,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 16777216000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d75fefaf209eb8650388af8dc066e1afa27073d3e65c71ec3c8c8b2f934026cf",
	                "LowerDir": "/var/lib/docker/overlay2/42b73ec2b02d64e633f0aa4ea3ea8d4521eda0caad5e0d6cf1d0ceb694ecafbd-init/diff:/var/lib/docker/overlay2/9d3f38ae04ffa0ee7bbacc3f831d8e286eafea1eb3c677a38c62c87997e117c6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/42b73ec2b02d64e633f0aa4ea3ea8d4521eda0caad5e0d6cf1d0ceb694ecafbd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/42b73ec2b02d64e633f0aa4ea3ea8d4521eda0caad5e0d6cf1d0ceb694ecafbd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/42b73ec2b02d64e633f0aa4ea3ea8d4521eda0caad5e0d6cf1d0ceb694ecafbd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "dockerenv-288409",
	                "Source": "/var/lib/docker/volumes/dockerenv-288409/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "dockerenv-288409",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "dockerenv-288409",
	                "name.minikube.sigs.k8s.io": "dockerenv-288409",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "19811ee2a64eec698ad19b077572d8ab098ba7bb8e075ad6f868be747651b729",
	            "SandboxKey": "/var/run/docker/netns/19811ee2a64e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32773"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32774"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32777"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32775"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32776"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "dockerenv-288409": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:10:8e:c9:c5:29",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "16f31e99b148dc18f25a1db2e7522aeba5af47fdd515e5ecf5d9cfad28e458d3",
	                    "EndpointID": "46ed42bc7f8e201913fafdee381707b10be10625d3f0a032abec4a1f852848b3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "dockerenv-288409",
	                        "d75fefaf209e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p dockerenv-288409 -n dockerenv-288409
helpers_test.go:252: <<< TestDockerEnvContainerd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestDockerEnvContainerd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p dockerenv-288409 logs -n 25
helpers_test.go:260: TestDockerEnvContainerd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                       ARGS                                                        │     PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons     │ addons-048605 addons disable metrics-server --alsologtostderr -v=1                                                │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ addons     │ addons-048605 addons disable cloud-spanner --alsologtostderr -v=1                                                 │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ addons     │ addons-048605 addons disable headlamp --alsologtostderr -v=1                                                      │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ ip         │ addons-048605 ip                                                                                                  │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ addons     │ addons-048605 addons disable registry --alsologtostderr -v=1                                                      │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ addons     │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-048605                                    │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ addons     │ addons-048605 addons disable registry-creds --alsologtostderr -v=1                                                │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ ssh        │ addons-048605 ssh cat /opt/local-path-provisioner/pvc-8d02d742-b1cb-40fd-8405-10d79a57af25_default_test-pvc/file1 │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ addons     │ addons-048605 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                   │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ addons     │ addons-048605 addons disable inspektor-gadget --alsologtostderr -v=1                                              │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ ssh        │ addons-048605 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                          │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ ip         │ addons-048605 ip                                                                                                  │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ addons     │ addons-048605 addons disable ingress-dns --alsologtostderr -v=1                                                   │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ addons     │ addons-048605 addons disable ingress --alsologtostderr -v=1                                                       │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ addons     │ addons-048605 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                         │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ addons     │ addons-048605 addons disable yakd --alsologtostderr -v=1                                                          │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ addons     │ addons-048605 addons disable volumesnapshots --alsologtostderr -v=1                                               │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ addons     │ addons-048605 addons disable csi-hostpath-driver --alsologtostderr -v=1                                           │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ stop       │ -p addons-048605                                                                                                  │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:34 UTC │
	│ addons     │ enable dashboard -p addons-048605                                                                                 │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:34 UTC │ 26 Sep 25 22:34 UTC │
	│ addons     │ disable dashboard -p addons-048605                                                                                │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:34 UTC │ 26 Sep 25 22:34 UTC │
	│ addons     │ disable gvisor -p addons-048605                                                                                   │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:34 UTC │ 26 Sep 25 22:34 UTC │
	│ delete     │ -p addons-048605                                                                                                  │ addons-048605    │ jenkins │ v1.37.0 │ 26 Sep 25 22:34 UTC │ 26 Sep 25 22:34 UTC │
	│ start      │ -p dockerenv-288409 --driver=docker  --container-runtime=containerd                                               │ dockerenv-288409 │ jenkins │ v1.37.0 │ 26 Sep 25 22:34 UTC │ 26 Sep 25 22:34 UTC │
	│ docker-env │ --ssh-host --ssh-add -p dockerenv-288409                                                                          │ dockerenv-288409 │ jenkins │ v1.37.0 │ 26 Sep 25 22:34 UTC │ 26 Sep 25 22:34 UTC │
	└────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:34:10
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:34:10.381355   35169 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:34:10.381453   35169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:34:10.381456   35169 out.go:374] Setting ErrFile to fd 2...
	I0926 22:34:10.381459   35169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:34:10.381621   35169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
	I0926 22:34:10.382073   35169 out.go:368] Setting JSON to false
	I0926 22:34:10.382823   35169 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":985,"bootTime":1758925065,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:34:10.382889   35169 start.go:140] virtualization: kvm guest
	I0926 22:34:10.384406   35169 out.go:179] * [dockerenv-288409] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:34:10.385561   35169 notify.go:220] Checking for updates...
	I0926 22:34:10.385591   35169 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:34:10.386559   35169 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:34:10.387532   35169 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-9508/kubeconfig
	I0926 22:34:10.388520   35169 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube
	I0926 22:34:10.389418   35169 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:34:10.390390   35169 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:34:10.391509   35169 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:34:10.413591   35169 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:34:10.413683   35169 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:34:10.466041   35169 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-26 22:34:10.456858146 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:34:10.466131   35169 docker.go:318] overlay module found
	I0926 22:34:10.467429   35169 out.go:179] * Using the docker driver based on user configuration
	I0926 22:34:10.468372   35169 start.go:304] selected driver: docker
	I0926 22:34:10.468376   35169 start.go:924] validating driver "docker" against <nil>
	I0926 22:34:10.468385   35169 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:34:10.468464   35169 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:34:10.519985   35169 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-26 22:34:10.511440987 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:34:10.520137   35169 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 22:34:10.520621   35169 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0926 22:34:10.520748   35169 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0926 22:34:10.522068   35169 out.go:179] * Using Docker driver with root privileges
	I0926 22:34:10.523102   35169 cni.go:84] Creating CNI manager for ""
	I0926 22:34:10.523147   35169 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0926 22:34:10.523152   35169 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0926 22:34:10.523206   35169 start.go:348] cluster config:
	{Name:dockerenv-288409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:dockerenv-288409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:34:10.524202   35169 out.go:179] * Starting "dockerenv-288409" primary control-plane node in "dockerenv-288409" cluster
	I0926 22:34:10.525165   35169 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0926 22:34:10.526141   35169 out.go:179] * Pulling base image v0.0.48 ...
	I0926 22:34:10.527133   35169 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0926 22:34:10.527161   35169 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-9508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0926 22:34:10.527169   35169 cache.go:58] Caching tarball of preloaded images
	I0926 22:34:10.527228   35169 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0926 22:34:10.527245   35169 preload.go:172] Found /home/jenkins/minikube-integration/21642-9508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 22:34:10.527251   35169 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0926 22:34:10.527541   35169 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/config.json ...
	I0926 22:34:10.527556   35169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/config.json: {Name:mk024090da8f422767b767fd68a3277a49031b2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:34:10.546078   35169 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0926 22:34:10.546087   35169 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0926 22:34:10.546103   35169 cache.go:232] Successfully downloaded all kic artifacts
	I0926 22:34:10.546135   35169 start.go:360] acquireMachinesLock for dockerenv-288409: {Name:mk0f5c5af672e42021152f6126e8db7977083766 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 22:34:10.546217   35169 start.go:364] duration metric: took 67.17µs to acquireMachinesLock for "dockerenv-288409"
	I0926 22:34:10.546235   35169 start.go:93] Provisioning new machine with config: &{Name:dockerenv-288409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:dockerenv-288409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPU
s: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0926 22:34:10.546289   35169 start.go:125] createHost starting for "" (driver="docker")
	I0926 22:34:10.547606   35169 out.go:252] * Creating docker container (CPUs=2, Memory=8000MB) ...
	I0926 22:34:10.547827   35169 start.go:159] libmachine.API.Create for "dockerenv-288409" (driver="docker")
	I0926 22:34:10.547847   35169 client.go:168] LocalClient.Create starting
	I0926 22:34:10.547893   35169 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca.pem
	I0926 22:34:10.547918   35169 main.go:141] libmachine: Decoding PEM data...
	I0926 22:34:10.547930   35169 main.go:141] libmachine: Parsing certificate...
	I0926 22:34:10.547972   35169 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21642-9508/.minikube/certs/cert.pem
	I0926 22:34:10.547985   35169 main.go:141] libmachine: Decoding PEM data...
	I0926 22:34:10.547991   35169 main.go:141] libmachine: Parsing certificate...
	I0926 22:34:10.548298   35169 cli_runner.go:164] Run: docker network inspect dockerenv-288409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0926 22:34:10.563692   35169 cli_runner.go:211] docker network inspect dockerenv-288409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0926 22:34:10.563745   35169 network_create.go:284] running [docker network inspect dockerenv-288409] to gather additional debugging logs...
	I0926 22:34:10.563774   35169 cli_runner.go:164] Run: docker network inspect dockerenv-288409
	W0926 22:34:10.578391   35169 cli_runner.go:211] docker network inspect dockerenv-288409 returned with exit code 1
	I0926 22:34:10.578410   35169 network_create.go:287] error running [docker network inspect dockerenv-288409]: docker network inspect dockerenv-288409: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network dockerenv-288409 not found
	I0926 22:34:10.578420   35169 network_create.go:289] output of [docker network inspect dockerenv-288409]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network dockerenv-288409 not found
	
	** /stderr **
	I0926 22:34:10.578519   35169 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0926 22:34:10.594175   35169 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c121b0}
	I0926 22:34:10.594210   35169 network_create.go:124] attempt to create docker network dockerenv-288409 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0926 22:34:10.594245   35169 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=dockerenv-288409 dockerenv-288409
	I0926 22:34:10.645546   35169 network_create.go:108] docker network dockerenv-288409 192.168.49.0/24 created
	I0926 22:34:10.645567   35169 kic.go:121] calculated static IP "192.168.49.2" for the "dockerenv-288409" container
	I0926 22:34:10.645624   35169 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0926 22:34:10.660667   35169 cli_runner.go:164] Run: docker volume create dockerenv-288409 --label name.minikube.sigs.k8s.io=dockerenv-288409 --label created_by.minikube.sigs.k8s.io=true
	I0926 22:34:10.676393   35169 oci.go:103] Successfully created a docker volume dockerenv-288409
	I0926 22:34:10.676477   35169 cli_runner.go:164] Run: docker run --rm --name dockerenv-288409-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-288409 --entrypoint /usr/bin/test -v dockerenv-288409:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0926 22:34:11.025597   35169 oci.go:107] Successfully prepared a docker volume dockerenv-288409
	I0926 22:34:11.025642   35169 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0926 22:34:11.025662   35169 kic.go:194] Starting extracting preloaded images to volume ...
	I0926 22:34:11.025725   35169 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21642-9508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-288409:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0926 22:34:15.086427   35169 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21642-9508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-288409:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.060664671s)
	I0926 22:34:15.086446   35169 kic.go:203] duration metric: took 4.060781676s to extract preloaded images to volume ...
	W0926 22:34:15.086557   35169 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0926 22:34:15.086584   35169 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0926 22:34:15.086615   35169 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0926 22:34:15.136425   35169 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname dockerenv-288409 --name dockerenv-288409 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-288409 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=dockerenv-288409 --network dockerenv-288409 --ip 192.168.49.2 --volume dockerenv-288409:/var --security-opt apparmor=unconfined --memory=8000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0926 22:34:15.385382   35169 cli_runner.go:164] Run: docker container inspect dockerenv-288409 --format={{.State.Running}}
	I0926 22:34:15.403671   35169 cli_runner.go:164] Run: docker container inspect dockerenv-288409 --format={{.State.Status}}
	I0926 22:34:15.420640   35169 cli_runner.go:164] Run: docker exec dockerenv-288409 stat /var/lib/dpkg/alternatives/iptables
	I0926 22:34:15.465048   35169 oci.go:144] the created container "dockerenv-288409" has a running status.
	I0926 22:34:15.465069   35169 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21642-9508/.minikube/machines/dockerenv-288409/id_rsa...
	I0926 22:34:15.548091   35169 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21642-9508/.minikube/machines/dockerenv-288409/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0926 22:34:15.577115   35169 cli_runner.go:164] Run: docker container inspect dockerenv-288409 --format={{.State.Status}}
	I0926 22:34:15.596249   35169 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0926 22:34:15.596294   35169 kic_runner.go:114] Args: [docker exec --privileged dockerenv-288409 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0926 22:34:15.643090   35169 cli_runner.go:164] Run: docker container inspect dockerenv-288409 --format={{.State.Status}}
	I0926 22:34:15.663525   35169 machine.go:93] provisionDockerMachine start ...
	I0926 22:34:15.663610   35169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-288409
	I0926 22:34:15.683806   35169 main.go:141] libmachine: Using SSH client type: native
	I0926 22:34:15.684150   35169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32773 <nil> <nil>}
	I0926 22:34:15.684160   35169 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 22:34:15.822272   35169 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-288409
	
	I0926 22:34:15.822296   35169 ubuntu.go:182] provisioning hostname "dockerenv-288409"
	I0926 22:34:15.822349   35169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-288409
	I0926 22:34:15.840262   35169 main.go:141] libmachine: Using SSH client type: native
	I0926 22:34:15.840512   35169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32773 <nil> <nil>}
	I0926 22:34:15.840528   35169 main.go:141] libmachine: About to run SSH command:
	sudo hostname dockerenv-288409 && echo "dockerenv-288409" | sudo tee /etc/hostname
	I0926 22:34:15.983962   35169 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-288409
	
	I0926 22:34:15.984030   35169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-288409
	I0926 22:34:16.000672   35169 main.go:141] libmachine: Using SSH client type: native
	I0926 22:34:16.000925   35169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32773 <nil> <nil>}
	I0926 22:34:16.000951   35169 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdockerenv-288409' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 dockerenv-288409/g' /etc/hosts;
				else 
					echo '127.0.1.1 dockerenv-288409' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 22:34:16.132808   35169 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 22:34:16.132826   35169 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21642-9508/.minikube CaCertPath:/home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21642-9508/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21642-9508/.minikube}
	I0926 22:34:16.132842   35169 ubuntu.go:190] setting up certificates
	I0926 22:34:16.132853   35169 provision.go:84] configureAuth start
	I0926 22:34:16.132908   35169 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-288409
	I0926 22:34:16.149427   35169 provision.go:143] copyHostCerts
	I0926 22:34:16.149468   35169 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-9508/.minikube/ca.pem, removing ...
	I0926 22:34:16.149486   35169 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-9508/.minikube/ca.pem
	I0926 22:34:16.149544   35169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21642-9508/.minikube/ca.pem (1078 bytes)
	I0926 22:34:16.149633   35169 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-9508/.minikube/cert.pem, removing ...
	I0926 22:34:16.149636   35169 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-9508/.minikube/cert.pem
	I0926 22:34:16.149660   35169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21642-9508/.minikube/cert.pem (1123 bytes)
	I0926 22:34:16.149720   35169 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-9508/.minikube/key.pem, removing ...
	I0926 22:34:16.149723   35169 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-9508/.minikube/key.pem
	I0926 22:34:16.149744   35169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21642-9508/.minikube/key.pem (1679 bytes)
	I0926 22:34:16.149829   35169 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21642-9508/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca-key.pem org=jenkins.dockerenv-288409 san=[127.0.0.1 192.168.49.2 dockerenv-288409 localhost minikube]
	I0926 22:34:16.309073   35169 provision.go:177] copyRemoteCerts
	I0926 22:34:16.309118   35169 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 22:34:16.309160   35169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-288409
	I0926 22:34:16.325536   35169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/dockerenv-288409/id_rsa Username:docker}
	I0926 22:34:16.420344   35169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0926 22:34:16.444466   35169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0926 22:34:16.466684   35169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 22:34:16.489048   35169 provision.go:87] duration metric: took 356.183417ms to configureAuth
	I0926 22:34:16.489066   35169 ubuntu.go:206] setting minikube options for container-runtime
	I0926 22:34:16.489220   35169 config.go:182] Loaded profile config "dockerenv-288409": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0926 22:34:16.489226   35169 machine.go:96] duration metric: took 825.689297ms to provisionDockerMachine
	I0926 22:34:16.489232   35169 client.go:171] duration metric: took 5.941381771s to LocalClient.Create
	I0926 22:34:16.489252   35169 start.go:167] duration metric: took 5.941425787s to libmachine.API.Create "dockerenv-288409"
	I0926 22:34:16.489261   35169 start.go:293] postStartSetup for "dockerenv-288409" (driver="docker")
	I0926 22:34:16.489268   35169 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 22:34:16.489306   35169 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 22:34:16.489346   35169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-288409
	I0926 22:34:16.506112   35169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/dockerenv-288409/id_rsa Username:docker}
	I0926 22:34:16.601827   35169 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 22:34:16.604817   35169 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0926 22:34:16.604834   35169 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0926 22:34:16.604840   35169 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0926 22:34:16.604844   35169 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0926 22:34:16.604851   35169 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-9508/.minikube/addons for local assets ...
	I0926 22:34:16.604892   35169 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-9508/.minikube/files for local assets ...
	I0926 22:34:16.604907   35169 start.go:296] duration metric: took 115.641809ms for postStartSetup
	I0926 22:34:16.605157   35169 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-288409
	I0926 22:34:16.621660   35169 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/config.json ...
	I0926 22:34:16.621881   35169 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 22:34:16.621908   35169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-288409
	I0926 22:34:16.637585   35169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/dockerenv-288409/id_rsa Username:docker}
	I0926 22:34:16.728035   35169 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0926 22:34:16.732021   35169 start.go:128] duration metric: took 6.185721821s to createHost
	I0926 22:34:16.732035   35169 start.go:83] releasing machines lock for "dockerenv-288409", held for 6.185810771s
	I0926 22:34:16.732096   35169 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-288409
	I0926 22:34:16.748428   35169 ssh_runner.go:195] Run: cat /version.json
	I0926 22:34:16.748460   35169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-288409
	I0926 22:34:16.748557   35169 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 22:34:16.748603   35169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-288409
	I0926 22:34:16.765284   35169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/dockerenv-288409/id_rsa Username:docker}
	I0926 22:34:16.765596   35169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/dockerenv-288409/id_rsa Username:docker}
	I0926 22:34:16.854030   35169 ssh_runner.go:195] Run: systemctl --version
	I0926 22:34:16.935140   35169 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 22:34:16.939479   35169 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0926 22:34:16.965163   35169 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0926 22:34:16.965213   35169 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 22:34:16.989332   35169 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0926 22:34:16.989346   35169 start.go:495] detecting cgroup driver to use...
	I0926 22:34:16.989373   35169 detect.go:190] detected "systemd" cgroup driver on host os
	I0926 22:34:16.989402   35169 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 22:34:17.000366   35169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 22:34:17.010390   35169 docker.go:218] disabling cri-docker service (if available) ...
	I0926 22:34:17.010448   35169 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0926 22:34:17.022466   35169 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0926 22:34:17.035103   35169 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0926 22:34:17.099002   35169 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0926 22:34:17.166920   35169 docker.go:234] disabling docker service ...
	I0926 22:34:17.166961   35169 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0926 22:34:17.182708   35169 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0926 22:34:17.193160   35169 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0926 22:34:17.258825   35169 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0926 22:34:17.320985   35169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0926 22:34:17.331339   35169 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 22:34:17.346432   35169 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0926 22:34:17.356792   35169 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 22:34:17.365915   35169 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0926 22:34:17.365961   35169 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0926 22:34:17.374851   35169 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 22:34:17.384372   35169 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 22:34:17.393131   35169 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 22:34:17.402170   35169 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 22:34:17.410250   35169 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 22:34:17.419045   35169 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 22:34:17.427649   35169 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 22:34:17.436490   35169 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 22:34:17.444277   35169 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 22:34:17.452042   35169 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:34:17.510253   35169 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 22:34:17.607084   35169 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0926 22:34:17.607141   35169 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0926 22:34:17.610614   35169 start.go:563] Will wait 60s for crictl version
	I0926 22:34:17.610649   35169 ssh_runner.go:195] Run: which crictl
	I0926 22:34:17.613785   35169 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 22:34:17.644631   35169 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0926 22:34:17.644682   35169 ssh_runner.go:195] Run: containerd --version
	I0926 22:34:17.665189   35169 ssh_runner.go:195] Run: containerd --version
	I0926 22:34:17.688770   35169 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0926 22:34:17.689681   35169 cli_runner.go:164] Run: docker network inspect dockerenv-288409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0926 22:34:17.705580   35169 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0926 22:34:17.708961   35169 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 22:34:17.719670   35169 kubeadm.go:883] updating cluster {Name:dockerenv-288409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:dockerenv-288409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 22:34:17.719762   35169 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0926 22:34:17.719811   35169 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 22:34:17.750143   35169 containerd.go:627] all images are preloaded for containerd runtime.
	I0926 22:34:17.750152   35169 containerd.go:534] Images already preloaded, skipping extraction
	I0926 22:34:17.750190   35169 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 22:34:17.781886   35169 containerd.go:627] all images are preloaded for containerd runtime.
	I0926 22:34:17.781898   35169 cache_images.go:85] Images are preloaded, skipping loading
	I0926 22:34:17.781905   35169 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0926 22:34:17.781996   35169 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=dockerenv-288409 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:dockerenv-288409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 22:34:17.782039   35169 ssh_runner.go:195] Run: sudo crictl info
	I0926 22:34:17.813696   35169 cni.go:84] Creating CNI manager for ""
	I0926 22:34:17.813703   35169 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0926 22:34:17.813714   35169 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 22:34:17.813732   35169 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:dockerenv-288409 NodeName:dockerenv-288409 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 22:34:17.813874   35169 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "dockerenv-288409"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 22:34:17.813929   35169 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0926 22:34:17.822519   35169 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 22:34:17.822566   35169 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 22:34:17.830879   35169 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0926 22:34:17.847258   35169 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 22:34:17.865561   35169 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I0926 22:34:17.881378   35169 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0926 22:34:17.884470   35169 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 22:34:17.894347   35169 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:34:17.954994   35169 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 22:34:17.983681   35169 certs.go:69] Setting up /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409 for IP: 192.168.49.2
	I0926 22:34:17.983692   35169 certs.go:195] generating shared ca certs ...
	I0926 22:34:17.983707   35169 certs.go:227] acquiring lock for ca certs: {Name:mk080975279b3a5ea38bd0bf3f7fdebf08ad146a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:34:17.983848   35169 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21642-9508/.minikube/ca.key
	I0926 22:34:17.983882   35169 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21642-9508/.minikube/proxy-client-ca.key
	I0926 22:34:17.983889   35169 certs.go:257] generating profile certs ...
	I0926 22:34:17.983934   35169 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/client.key
	I0926 22:34:17.983942   35169 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/client.crt with IP's: []
	I0926 22:34:18.404438   35169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/client.crt ...
	I0926 22:34:18.404454   35169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/client.crt: {Name:mk96817594255b6552424600fe11b22efb8abb1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:34:18.404612   35169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/client.key ...
	I0926 22:34:18.404618   35169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/client.key: {Name:mkd9e78dd738e2fe90f03958687f344d9e0c71e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:34:18.404690   35169 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/apiserver.key.06e91cc2
	I0926 22:34:18.404699   35169 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/apiserver.crt.06e91cc2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0926 22:34:19.206187   35169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/apiserver.crt.06e91cc2 ...
	I0926 22:34:19.206203   35169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/apiserver.crt.06e91cc2: {Name:mk821940e00eae1caec9748b6e62e7ef326488fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:34:19.206360   35169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/apiserver.key.06e91cc2 ...
	I0926 22:34:19.206368   35169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/apiserver.key.06e91cc2: {Name:mk00d5afc9d5a0c0ba19b4b83e46c5601ad33cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:34:19.206432   35169 certs.go:382] copying /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/apiserver.crt.06e91cc2 -> /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/apiserver.crt
	I0926 22:34:19.206500   35169 certs.go:386] copying /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/apiserver.key.06e91cc2 -> /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/apiserver.key
	I0926 22:34:19.206550   35169 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/proxy-client.key
	I0926 22:34:19.206560   35169 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/proxy-client.crt with IP's: []
	I0926 22:34:19.597689   35169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/proxy-client.crt ...
	I0926 22:34:19.597707   35169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/proxy-client.crt: {Name:mk7ba0398406f0eee427cfbeeaffc0d8a3510864 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:34:19.597883   35169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/proxy-client.key ...
	I0926 22:34:19.597892   35169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/proxy-client.key: {Name:mk44e68c614ad945afb9fe3b9d18eea645d35c5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:34:19.598090   35169 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 22:34:19.598124   35169 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca.pem (1078 bytes)
	I0926 22:34:19.598145   35169 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/cert.pem (1123 bytes)
	I0926 22:34:19.598161   35169 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/key.pem (1679 bytes)
	I0926 22:34:19.598776   35169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 22:34:19.622683   35169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 22:34:19.646906   35169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 22:34:19.669833   35169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 22:34:19.691549   35169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0926 22:34:19.713389   35169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0926 22:34:19.734728   35169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 22:34:19.756206   35169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/dockerenv-288409/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 22:34:19.778215   35169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 22:34:19.802241   35169 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 22:34:19.818253   35169 ssh_runner.go:195] Run: openssl version
	I0926 22:34:19.823088   35169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 22:34:19.833291   35169 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:34:19.836397   35169 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:34:19.836438   35169 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:34:19.842484   35169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 22:34:19.850890   35169 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 22:34:19.853920   35169 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0926 22:34:19.853961   35169 kubeadm.go:400] StartCluster: {Name:dockerenv-288409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:dockerenv-288409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:34:19.854019   35169 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0926 22:34:19.854062   35169 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0926 22:34:19.885486   35169 cri.go:89] found id: ""
	I0926 22:34:19.885525   35169 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 22:34:19.893655   35169 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 22:34:19.901780   35169 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0926 22:34:19.901810   35169 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 22:34:19.909675   35169 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 22:34:19.909686   35169 kubeadm.go:157] found existing configuration files:
	
	I0926 22:34:19.909718   35169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 22:34:19.917520   35169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 22:34:19.917554   35169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 22:34:19.925063   35169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 22:34:19.932686   35169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 22:34:19.932717   35169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 22:34:19.940335   35169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 22:34:19.948170   35169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 22:34:19.948206   35169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 22:34:19.955819   35169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 22:34:19.963428   35169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 22:34:19.963457   35169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0926 22:34:19.970984   35169 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0926 22:34:20.020731   35169 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0926 22:34:20.069416   35169 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0926 22:34:28.043398   35169 kubeadm.go:318] [init] Using Kubernetes version: v1.34.0
	I0926 22:34:28.043472   35169 kubeadm.go:318] [preflight] Running pre-flight checks
	I0926 22:34:28.043552   35169 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I0926 22:34:28.043615   35169 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1040-gcp
	I0926 22:34:28.043646   35169 kubeadm.go:318] OS: Linux
	I0926 22:34:28.043680   35169 kubeadm.go:318] CGROUPS_CPU: enabled
	I0926 22:34:28.043727   35169 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I0926 22:34:28.043797   35169 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I0926 22:34:28.043837   35169 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I0926 22:34:28.043879   35169 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I0926 22:34:28.043916   35169 kubeadm.go:318] CGROUPS_PIDS: enabled
	I0926 22:34:28.043954   35169 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I0926 22:34:28.043992   35169 kubeadm.go:318] CGROUPS_IO: enabled
	I0926 22:34:28.044056   35169 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 22:34:28.044131   35169 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 22:34:28.044216   35169 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 22:34:28.044269   35169 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 22:34:28.045983   35169 out.go:252]   - Generating certificates and keys ...
	I0926 22:34:28.046040   35169 kubeadm.go:318] [certs] Using existing ca certificate authority
	I0926 22:34:28.046101   35169 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I0926 22:34:28.046154   35169 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0926 22:34:28.046210   35169 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I0926 22:34:28.046269   35169 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I0926 22:34:28.046310   35169 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I0926 22:34:28.046362   35169 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I0926 22:34:28.046461   35169 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [dockerenv-288409 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0926 22:34:28.046506   35169 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I0926 22:34:28.046625   35169 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [dockerenv-288409 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0926 22:34:28.046679   35169 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0926 22:34:28.046734   35169 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I0926 22:34:28.046799   35169 kubeadm.go:318] [certs] Generating "sa" key and public key
	I0926 22:34:28.046854   35169 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 22:34:28.046908   35169 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 22:34:28.046978   35169 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 22:34:28.047025   35169 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 22:34:28.047079   35169 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 22:34:28.047127   35169 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 22:34:28.047193   35169 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 22:34:28.047252   35169 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 22:34:28.048345   35169 out.go:252]   - Booting up control plane ...
	I0926 22:34:28.048419   35169 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 22:34:28.048480   35169 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 22:34:28.048558   35169 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 22:34:28.048694   35169 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 22:34:28.048824   35169 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0926 22:34:28.048958   35169 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0926 22:34:28.049082   35169 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 22:34:28.049148   35169 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I0926 22:34:28.049271   35169 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0926 22:34:28.049359   35169 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0926 22:34:28.049414   35169 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.788611ms
	I0926 22:34:28.049492   35169 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0926 22:34:28.049557   35169 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0926 22:34:28.049643   35169 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0926 22:34:28.049705   35169 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0926 22:34:28.049803   35169 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.010772138s
	I0926 22:34:28.049869   35169 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.427586132s
	I0926 22:34:28.049963   35169 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501847601s
	I0926 22:34:28.050117   35169 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 22:34:28.050300   35169 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 22:34:28.050375   35169 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 22:34:28.050537   35169 kubeadm.go:318] [mark-control-plane] Marking the node dockerenv-288409 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 22:34:28.050593   35169 kubeadm.go:318] [bootstrap-token] Using token: g29eup.zvx9ymjpjpfj88e5
	I0926 22:34:28.051532   35169 out.go:252]   - Configuring RBAC rules ...
	I0926 22:34:28.051637   35169 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 22:34:28.051704   35169 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 22:34:28.051846   35169 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 22:34:28.051955   35169 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 22:34:28.052057   35169 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 22:34:28.052139   35169 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 22:34:28.052284   35169 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 22:34:28.052350   35169 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I0926 22:34:28.052394   35169 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I0926 22:34:28.052399   35169 kubeadm.go:318] 
	I0926 22:34:28.052444   35169 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I0926 22:34:28.052446   35169 kubeadm.go:318] 
	I0926 22:34:28.052544   35169 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I0926 22:34:28.052549   35169 kubeadm.go:318] 
	I0926 22:34:28.052584   35169 kubeadm.go:318]   mkdir -p $HOME/.kube
	I0926 22:34:28.052667   35169 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 22:34:28.052741   35169 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 22:34:28.052745   35169 kubeadm.go:318] 
	I0926 22:34:28.052831   35169 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I0926 22:34:28.052837   35169 kubeadm.go:318] 
	I0926 22:34:28.052895   35169 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 22:34:28.052897   35169 kubeadm.go:318] 
	I0926 22:34:28.052952   35169 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I0926 22:34:28.053033   35169 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 22:34:28.053127   35169 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 22:34:28.053131   35169 kubeadm.go:318] 
	I0926 22:34:28.053241   35169 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 22:34:28.053309   35169 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I0926 22:34:28.053311   35169 kubeadm.go:318] 
	I0926 22:34:28.053385   35169 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token g29eup.zvx9ymjpjpfj88e5 \
	I0926 22:34:28.053486   35169 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1dbeb716d602e0941682b86f7d46c5a496a37728672c82fc41605cb6bf1292a7 \
	I0926 22:34:28.053515   35169 kubeadm.go:318] 	--control-plane 
	I0926 22:34:28.053519   35169 kubeadm.go:318] 
	I0926 22:34:28.053633   35169 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I0926 22:34:28.053644   35169 kubeadm.go:318] 
	I0926 22:34:28.053733   35169 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token g29eup.zvx9ymjpjpfj88e5 \
	I0926 22:34:28.053850   35169 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1dbeb716d602e0941682b86f7d46c5a496a37728672c82fc41605cb6bf1292a7 
	I0926 22:34:28.053860   35169 cni.go:84] Creating CNI manager for ""
	I0926 22:34:28.053868   35169 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0926 22:34:28.055023   35169 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0926 22:34:28.055842   35169 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0926 22:34:28.059847   35169 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0926 22:34:28.059856   35169 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0926 22:34:28.077619   35169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0926 22:34:28.270556   35169 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 22:34:28.270633   35169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:34:28.270664   35169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes dockerenv-288409 minikube.k8s.io/updated_at=2025_09_26T22_34_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47 minikube.k8s.io/name=dockerenv-288409 minikube.k8s.io/primary=true
	I0926 22:34:28.278127   35169 ops.go:34] apiserver oom_adj: -16
	I0926 22:34:28.370521   35169 kubeadm.go:1113] duration metric: took 99.958235ms to wait for elevateKubeSystemPrivileges
	I0926 22:34:28.370554   35169 kubeadm.go:402] duration metric: took 8.516595515s to StartCluster
	I0926 22:34:28.370576   35169 settings.go:142] acquiring lock: {Name:mke935858c08b57824075e52fb45232e2555a3b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:34:28.370633   35169 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-9508/kubeconfig
	I0926 22:34:28.371238   35169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-9508/kubeconfig: {Name:mka72bf89c026ab3e09a0062db4219353845dcad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:34:28.371430   35169 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0926 22:34:28.371451   35169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0926 22:34:28.371472   35169 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 22:34:28.371534   35169 addons.go:69] Setting storage-provisioner=true in profile "dockerenv-288409"
	I0926 22:34:28.371544   35169 addons.go:238] Setting addon storage-provisioner=true in "dockerenv-288409"
	I0926 22:34:28.371577   35169 host.go:66] Checking if "dockerenv-288409" exists ...
	I0926 22:34:28.371574   35169 addons.go:69] Setting default-storageclass=true in profile "dockerenv-288409"
	I0926 22:34:28.371595   35169 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "dockerenv-288409"
	I0926 22:34:28.371656   35169 config.go:182] Loaded profile config "dockerenv-288409": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0926 22:34:28.371943   35169 cli_runner.go:164] Run: docker container inspect dockerenv-288409 --format={{.State.Status}}
	I0926 22:34:28.371992   35169 cli_runner.go:164] Run: docker container inspect dockerenv-288409 --format={{.State.Status}}
	I0926 22:34:28.372719   35169 out.go:179] * Verifying Kubernetes components...
	I0926 22:34:28.373679   35169 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:34:28.394615   35169 addons.go:238] Setting addon default-storageclass=true in "dockerenv-288409"
	I0926 22:34:28.394648   35169 host.go:66] Checking if "dockerenv-288409" exists ...
	I0926 22:34:28.395179   35169 cli_runner.go:164] Run: docker container inspect dockerenv-288409 --format={{.State.Status}}
	I0926 22:34:28.395856   35169 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 22:34:28.396970   35169 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 22:34:28.396978   35169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 22:34:28.397014   35169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-288409
	I0926 22:34:28.418421   35169 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 22:34:28.418434   35169 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 22:34:28.418485   35169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-288409
	I0926 22:34:28.422142   35169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/dockerenv-288409/id_rsa Username:docker}
	I0926 22:34:28.440920   35169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/dockerenv-288409/id_rsa Username:docker}
	I0926 22:34:28.454189   35169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0926 22:34:28.492084   35169 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 22:34:28.530224   35169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 22:34:28.558565   35169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 22:34:28.604180   35169 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0926 22:34:28.604906   35169 api_server.go:52] waiting for apiserver process to appear ...
	I0926 22:34:28.604955   35169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 22:34:28.780000   35169 api_server.go:72] duration metric: took 408.548638ms to wait for apiserver process to appear ...
	I0926 22:34:28.780013   35169 api_server.go:88] waiting for apiserver healthz status ...
	I0926 22:34:28.780031   35169 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0926 22:34:28.785435   35169 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0926 22:34:28.786277   35169 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0926 22:34:28.786298   35169 addons.go:514] duration metric: took 414.834334ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0926 22:34:28.786936   35169 api_server.go:141] control plane version: v1.34.0
	I0926 22:34:28.786947   35169 api_server.go:131] duration metric: took 6.929906ms to wait for apiserver health ...
	I0926 22:34:28.786953   35169 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 22:34:28.789080   35169 system_pods.go:59] 5 kube-system pods found
	I0926 22:34:28.789102   35169 system_pods.go:61] "etcd-dockerenv-288409" [2df10d92-0afe-4dc7-8e0a-8f0e1dfd2910] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 22:34:28.789109   35169 system_pods.go:61] "kube-apiserver-dockerenv-288409" [79739b95-a990-47d3-99c8-d5a8b7922428] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 22:34:28.789115   35169 system_pods.go:61] "kube-controller-manager-dockerenv-288409" [8665119f-0062-4e45-9b0a-f7e9c6254104] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:34:28.789124   35169 system_pods.go:61] "kube-scheduler-dockerenv-288409" [3bc6684a-9a86-4f64-b01b-8dba6e250edf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:34:28.789130   35169 system_pods.go:61] "storage-provisioner" [829a3b28-be70-4704-9ff0-0f13173e9a69] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0926 22:34:28.789136   35169 system_pods.go:74] duration metric: took 2.178159ms to wait for pod list to return data ...
	I0926 22:34:28.789148   35169 kubeadm.go:586] duration metric: took 417.699448ms to wait for: map[apiserver:true system_pods:true]
	I0926 22:34:28.789160   35169 node_conditions.go:102] verifying NodePressure condition ...
	I0926 22:34:28.790910   35169 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0926 22:34:28.790921   35169 node_conditions.go:123] node cpu capacity is 8
	I0926 22:34:28.790930   35169 node_conditions.go:105] duration metric: took 1.767668ms to run NodePressure ...
	I0926 22:34:28.790939   35169 start.go:241] waiting for startup goroutines ...
	I0926 22:34:29.106891   35169 kapi.go:214] "coredns" deployment in "kube-system" namespace and "dockerenv-288409" context rescaled to 1 replicas
	I0926 22:34:29.106921   35169 start.go:246] waiting for cluster config update ...
	I0926 22:34:29.106934   35169 start.go:255] writing updated cluster config ...
	I0926 22:34:29.107232   35169 ssh_runner.go:195] Run: rm -f paused
	I0926 22:34:29.150275   35169 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 22:34:29.151857   35169 out.go:179] * Done! kubectl is now configured to use "dockerenv-288409" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	747fbf3b16772       6e38f40d628db       10 seconds ago      Running             storage-provisioner       0                   c568bedf72619       storage-provisioner
	b8e256beac427       409467f978b4a       10 seconds ago      Running             kindnet-cni               0                   01618a7e9e0bf       kindnet-j7zqb
	00c07918d85dc       df0860106674d       10 seconds ago      Running             kube-proxy                0                   58f1e505db5ac       kube-proxy-b8w46
	07f5581a290d4       46169d968e920       20 seconds ago      Running             kube-scheduler            0                   b5acd76ea2e75       kube-scheduler-dockerenv-288409
	a122dc94a3fdc       5f1f5298c888d       20 seconds ago      Running             etcd                      0                   26fc76b5fa3d9       etcd-dockerenv-288409
	6ab780ebce19e       a0af72f2ec6d6       20 seconds ago      Running             kube-controller-manager   0                   64dd925d3b847       kube-controller-manager-dockerenv-288409
	15bac67683448       90550c43ad2bc       20 seconds ago      Running             kube-apiserver            0                   dba1bcd26b7e0       kube-apiserver-dockerenv-288409
	
	
	==> containerd <==
	Sep 26 22:34:23 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:23.499829346Z" level=info msg="StartContainer for \"15bac676834485c52f16bf244d0549ad3ef5ec24a71542f0a164b6e89a7a3f5b\" returns successfully"
	Sep 26 22:34:23 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:23.499842185Z" level=info msg="StartContainer for \"07f5581a290d4cf0a51f67b7de1bcd59271ed696ab242fc2115709f2274c98a5\" returns successfully"
	Sep 26 22:34:23 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:23.509667892Z" level=info msg="StartContainer for \"6ab780ebce19ebe083d69344e2b93b4c8fbf614e07cd2e0dc5f281ee27f39465\" returns successfully"
	Sep 26 22:34:23 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:23.509764738Z" level=info msg="StartContainer for \"a122dc94a3fdc725c11720dd2aa12be971bdefa53aa2ab2d310cfef11b0accd5\" returns successfully"
	Sep 26 22:34:33 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:33.358547587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b8w46,Uid:a1ecac65-9974-4c9c-a3fa-2a23b59e0583,Namespace:kube-system,Attempt:0,}"
	Sep 26 22:34:33 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:33.371722478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-j7zqb,Uid:dbd80d7c-ac73-4ace-b92a-c92e83855505,Namespace:kube-system,Attempt:0,}"
	Sep 26 22:34:33 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:33.420470242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b8w46,Uid:a1ecac65-9974-4c9c-a3fa-2a23b59e0583,Namespace:kube-system,Attempt:0,} returns sandbox id \"58f1e505db5ac5e5680b84f3a87296265431e3359d5a1e3d839dfb91f6295765\""
	Sep 26 22:34:33 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:33.425418458Z" level=info msg="CreateContainer within sandbox \"58f1e505db5ac5e5680b84f3a87296265431e3359d5a1e3d839dfb91f6295765\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Sep 26 22:34:33 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:33.434461484Z" level=info msg="CreateContainer within sandbox \"58f1e505db5ac5e5680b84f3a87296265431e3359d5a1e3d839dfb91f6295765\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"00c07918d85dc67ccd690abe73f0f796c2ba48ffb3bc18370c690feb73d26a6a\""
	Sep 26 22:34:33 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:33.434956417Z" level=info msg="StartContainer for \"00c07918d85dc67ccd690abe73f0f796c2ba48ffb3bc18370c690feb73d26a6a\""
	Sep 26 22:34:33 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:33.454839353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ks8sh,Uid:1a3a32b2-531c-4c7e-80f8-1fb4c90a7113,Namespace:kube-system,Attempt:0,}"
	Sep 26 22:34:33 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:33.472150192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ks8sh,Uid:1a3a32b2-531c-4c7e-80f8-1fb4c90a7113,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e7469dea77034071c540151bc35e8fa275f94c06d5888615f95f61f1ad72c82\": failed to find network info for sandbox \"8e7469dea77034071c540151bc35e8fa275f94c06d5888615f95f61f1ad72c82\""
	Sep 26 22:34:33 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:33.503053983Z" level=info msg="StartContainer for \"00c07918d85dc67ccd690abe73f0f796c2ba48ffb3bc18370c690feb73d26a6a\" returns successfully"
	Sep 26 22:34:33 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:33.707991339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-j7zqb,Uid:dbd80d7c-ac73-4ace-b92a-c92e83855505,Namespace:kube-system,Attempt:0,} returns sandbox id \"01618a7e9e0bff3c2e3cfcc5324a7f4d594bd9eae2dc9eb24fad476daa34bf49\""
	Sep 26 22:34:33 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:33.712339857Z" level=info msg="CreateContainer within sandbox \"01618a7e9e0bff3c2e3cfcc5324a7f4d594bd9eae2dc9eb24fad476daa34bf49\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Sep 26 22:34:33 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:33.715872807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:829a3b28-be70-4704-9ff0-0f13173e9a69,Namespace:kube-system,Attempt:0,}"
	Sep 26 22:34:33 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:33.724323936Z" level=info msg="CreateContainer within sandbox \"01618a7e9e0bff3c2e3cfcc5324a7f4d594bd9eae2dc9eb24fad476daa34bf49\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"b8e256beac4272358658f856de9d4ee060aa68822cd69e36db42c39cae7f8143\""
	Sep 26 22:34:33 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:33.724815585Z" level=info msg="StartContainer for \"b8e256beac4272358658f856de9d4ee060aa68822cd69e36db42c39cae7f8143\""
	Sep 26 22:34:33 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:33.806439444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:829a3b28-be70-4704-9ff0-0f13173e9a69,Namespace:kube-system,Attempt:0,} returns sandbox id \"c568bedf72619cc91850a3b263cfae3c18910b6726ebec3ac0332006959aa173\""
	Sep 26 22:34:33 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:33.811777598Z" level=info msg="CreateContainer within sandbox \"c568bedf72619cc91850a3b263cfae3c18910b6726ebec3ac0332006959aa173\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Sep 26 22:34:33 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:33.813877410Z" level=info msg="StartContainer for \"b8e256beac4272358658f856de9d4ee060aa68822cd69e36db42c39cae7f8143\" returns successfully"
	Sep 26 22:34:33 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:33.820485651Z" level=info msg="CreateContainer within sandbox \"c568bedf72619cc91850a3b263cfae3c18910b6726ebec3ac0332006959aa173\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"747fbf3b16772930c7f5f29583387afea27865f28e72502aecbc606127778af9\""
	Sep 26 22:34:33 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:33.820920515Z" level=info msg="StartContainer for \"747fbf3b16772930c7f5f29583387afea27865f28e72502aecbc606127778af9\""
	Sep 26 22:34:33 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:33.871076376Z" level=info msg="StartContainer for \"747fbf3b16772930c7f5f29583387afea27865f28e72502aecbc606127778af9\" returns successfully"
	Sep 26 22:34:37 dockerenv-288409 containerd[760]: time="2025-09-26T22:34:37.490535781Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	
	
	==> describe nodes <==
	Name:               dockerenv-288409
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=dockerenv-288409
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=dockerenv-288409
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T22_34_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 22:34:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  dockerenv-288409
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 22:34:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 22:34:37 +0000   Fri, 26 Sep 2025 22:34:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 22:34:37 +0000   Fri, 26 Sep 2025 22:34:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 22:34:37 +0000   Fri, 26 Sep 2025 22:34:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 22:34:37 +0000   Fri, 26 Sep 2025 22:34:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    dockerenv-288409
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 507c4e7b5f934498b1d9e3008fbd9b6d
	  System UUID:                f2a19196-8d22-41a2-9930-32d776aeedaa
	  Boot ID:                    d6777c8b-c717-4851-a50e-a884fc659348
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-ks8sh                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11s
	  kube-system                 etcd-dockerenv-288409                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         17s
	  kube-system                 kindnet-j7zqb                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11s
	  kube-system                 kube-apiserver-dockerenv-288409             250m (3%)     0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-controller-manager-dockerenv-288409    200m (2%)     0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-proxy-b8w46                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kube-scheduler-dockerenv-288409             100m (1%)     0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10s                kube-proxy       
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node dockerenv-288409 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node dockerenv-288409 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node dockerenv-288409 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17s                kubelet          Node dockerenv-288409 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17s                kubelet          Node dockerenv-288409 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17s                kubelet          Node dockerenv-288409 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13s                node-controller  Node dockerenv-288409 event: Registered Node dockerenv-288409 in Controller
	
	
	==> dmesg <==
	[Sep26 22:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001877] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.086010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.387443] i8042: Warning: Keylock active
	[  +0.011484] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004689] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000998] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.001003] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000986] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.001141] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000947] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.001004] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001049] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001043] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.448971] block sda: the capability attribute has been deprecated.
	[  +0.076726] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021403] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.907524] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [a122dc94a3fdc725c11720dd2aa12be971bdefa53aa2ab2d310cfef11b0accd5] <==
	{"level":"warn","ts":"2025-09-26T22:34:24.310147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.317113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.325624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.331506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.338728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.345090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.350780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.357241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.362861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.369010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.376904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.383174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.389681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.395736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.401627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.411879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.418865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.425357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.431514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.437293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.443245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.448870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.463389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.470086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:34:24.477539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35148","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:34:44 up 16 min,  0 users,  load average: 1.37, 1.14, 0.60
	Linux dockerenv-288409 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [b8e256beac4272358658f856de9d4ee060aa68822cd69e36db42c39cae7f8143] <==
	I0926 22:34:33.995787       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0926 22:34:33.996024       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0926 22:34:33.996120       1 main.go:148] setting mtu 1500 for CNI 
	I0926 22:34:33.996134       1 main.go:178] kindnetd IP family: "ipv4"
	I0926 22:34:33.996152       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-26T22:34:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0926 22:34:34.197317       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0926 22:34:34.197354       1 controller.go:381] "Waiting for informer caches to sync"
	I0926 22:34:34.197369       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0926 22:34:34.197578       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0926 22:34:34.497964       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0926 22:34:34.497982       1 metrics.go:72] Registering metrics
	I0926 22:34:34.498042       1 controller.go:711] "Syncing nftables rules"
	I0926 22:34:44.197862       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:34:44.197896       1 main.go:301] handling current node
	
	
	==> kube-apiserver [15bac676834485c52f16bf244d0549ad3ef5ec24a71542f0a164b6e89a7a3f5b] <==
	I0926 22:34:25.011414       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0926 22:34:25.011531       1 aggregator.go:171] initial CRD sync complete...
	I0926 22:34:25.011552       1 autoregister_controller.go:144] Starting autoregister controller
	I0926 22:34:25.011558       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0926 22:34:25.011565       1 cache.go:39] Caches are synced for autoregister controller
	I0926 22:34:25.013036       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0926 22:34:25.013092       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I0926 22:34:25.031236       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0926 22:34:25.909631       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0926 22:34:25.913163       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0926 22:34:25.913180       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0926 22:34:26.303664       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0926 22:34:26.333529       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0926 22:34:26.412387       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0926 22:34:26.417862       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0926 22:34:26.418568       1 controller.go:667] quota admission added evaluator for: endpoints
	I0926 22:34:26.421981       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0926 22:34:26.934924       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0926 22:34:27.441834       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0926 22:34:27.448878       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0926 22:34:27.456464       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0926 22:34:32.837939       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0926 22:34:32.840786       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0926 22:34:32.936979       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0926 22:34:33.036379       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [6ab780ebce19ebe083d69344e2b93b4c8fbf614e07cd2e0dc5f281ee27f39465] <==
	I0926 22:34:31.896673       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="dockerenv-288409" podCIDRs=["10.244.0.0/24"]
	I0926 22:34:31.933357       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0926 22:34:31.933500       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0926 22:34:31.934568       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0926 22:34:31.934590       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0926 22:34:31.934594       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0926 22:34:31.934620       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0926 22:34:31.934699       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0926 22:34:31.934724       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0926 22:34:31.934729       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0926 22:34:31.934710       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0926 22:34:31.934717       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0926 22:34:31.934819       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0926 22:34:31.934803       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0926 22:34:31.934783       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0926 22:34:31.934881       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0926 22:34:31.934973       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0926 22:34:31.938196       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0926 22:34:31.938327       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0926 22:34:31.940573       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:34:31.941757       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:34:31.942902       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0926 22:34:31.949172       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0926 22:34:31.950374       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0926 22:34:31.954529       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-proxy [00c07918d85dc67ccd690abe73f0f796c2ba48ffb3bc18370c690feb73d26a6a] <==
	I0926 22:34:33.531801       1 server_linux.go:53] "Using iptables proxy"
	I0926 22:34:33.597197       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:34:33.697887       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:34:33.697916       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:34:33.698046       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:34:33.721490       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:34:33.721533       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:34:33.727458       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:34:33.727925       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:34:33.727960       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:34:33.729511       1 config.go:200] "Starting service config controller"
	I0926 22:34:33.729522       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:34:33.729533       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:34:33.729538       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:34:33.729579       1 config.go:309] "Starting node config controller"
	I0926 22:34:33.729585       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:34:33.729592       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:34:33.729850       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:34:33.729868       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:34:33.829691       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 22:34:33.829691       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:34:33.830102       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [07f5581a290d4cf0a51f67b7de1bcd59271ed696ab242fc2115709f2274c98a5] <==
	I0926 22:34:25.549383       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:34:25.551092       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:34:25.551121       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:34:25.551388       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E0926 22:34:25.552775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0926 22:34:25.553507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I0926 22:34:25.554403       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0926 22:34:25.554839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:34:25.554865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:34:25.555047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:34:25.555233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:34:25.555370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 22:34:25.555467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:34:25.555477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 22:34:25.555475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:34:25.555585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:34:25.555634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:34:25.555656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:34:25.555776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:34:25.555921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:34:25.556070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:34:25.556471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 22:34:25.556497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:34:25.556482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I0926 22:34:26.851829       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 26 22:34:32 dockerenv-288409 kubelet[1524]: E0926 22:34:32.070493    1524 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 26 22:34:32 dockerenv-288409 kubelet[1524]: E0926 22:34:32.070529    1524 projected.go:196] Error preparing data for projected volume kube-api-access-64wn8 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 26 22:34:32 dockerenv-288409 kubelet[1524]: E0926 22:34:32.070658    1524 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/829a3b28-be70-4704-9ff0-0f13173e9a69-kube-api-access-64wn8 podName:829a3b28-be70-4704-9ff0-0f13173e9a69 nodeName:}" failed. No retries permitted until 2025-09-26 22:34:32.570626732 +0000 UTC m=+5.393883412 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-64wn8" (UniqueName: "kubernetes.io/projected/829a3b28-be70-4704-9ff0-0f13173e9a69-kube-api-access-64wn8") pod "storage-provisioner" (UID: "829a3b28-be70-4704-9ff0-0f13173e9a69") : configmap "kube-root-ca.crt" not found
	Sep 26 22:34:32 dockerenv-288409 kubelet[1524]: E0926 22:34:32.669723    1524 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 26 22:34:32 dockerenv-288409 kubelet[1524]: E0926 22:34:32.669767    1524 projected.go:196] Error preparing data for projected volume kube-api-access-64wn8 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 26 22:34:32 dockerenv-288409 kubelet[1524]: E0926 22:34:32.669838    1524 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/829a3b28-be70-4704-9ff0-0f13173e9a69-kube-api-access-64wn8 podName:829a3b28-be70-4704-9ff0-0f13173e9a69 nodeName:}" failed. No retries permitted until 2025-09-26 22:34:33.669820137 +0000 UTC m=+6.493076811 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-64wn8" (UniqueName: "kubernetes.io/projected/829a3b28-be70-4704-9ff0-0f13173e9a69-kube-api-access-64wn8") pod "storage-provisioner" (UID: "829a3b28-be70-4704-9ff0-0f13173e9a69") : configmap "kube-root-ca.crt" not found
	Sep 26 22:34:33 dockerenv-288409 kubelet[1524]: I0926 22:34:33.071801    1524 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfzpq\" (UniqueName: \"kubernetes.io/projected/a1ecac65-9974-4c9c-a3fa-2a23b59e0583-kube-api-access-mfzpq\") pod \"kube-proxy-b8w46\" (UID: \"a1ecac65-9974-4c9c-a3fa-2a23b59e0583\") " pod="kube-system/kube-proxy-b8w46"
	Sep 26 22:34:33 dockerenv-288409 kubelet[1524]: I0926 22:34:33.071832    1524 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbd80d7c-ac73-4ace-b92a-c92e83855505-lib-modules\") pod \"kindnet-j7zqb\" (UID: \"dbd80d7c-ac73-4ace-b92a-c92e83855505\") " pod="kube-system/kindnet-j7zqb"
	Sep 26 22:34:33 dockerenv-288409 kubelet[1524]: I0926 22:34:33.071861    1524 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a1ecac65-9974-4c9c-a3fa-2a23b59e0583-kube-proxy\") pod \"kube-proxy-b8w46\" (UID: \"a1ecac65-9974-4c9c-a3fa-2a23b59e0583\") " pod="kube-system/kube-proxy-b8w46"
	Sep 26 22:34:33 dockerenv-288409 kubelet[1524]: I0926 22:34:33.071877    1524 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1ecac65-9974-4c9c-a3fa-2a23b59e0583-lib-modules\") pod \"kube-proxy-b8w46\" (UID: \"a1ecac65-9974-4c9c-a3fa-2a23b59e0583\") " pod="kube-system/kube-proxy-b8w46"
	Sep 26 22:34:33 dockerenv-288409 kubelet[1524]: I0926 22:34:33.071932    1524 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dbd80d7c-ac73-4ace-b92a-c92e83855505-cni-cfg\") pod \"kindnet-j7zqb\" (UID: \"dbd80d7c-ac73-4ace-b92a-c92e83855505\") " pod="kube-system/kindnet-j7zqb"
	Sep 26 22:34:33 dockerenv-288409 kubelet[1524]: I0926 22:34:33.071972    1524 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbd80d7c-ac73-4ace-b92a-c92e83855505-xtables-lock\") pod \"kindnet-j7zqb\" (UID: \"dbd80d7c-ac73-4ace-b92a-c92e83855505\") " pod="kube-system/kindnet-j7zqb"
	Sep 26 22:34:33 dockerenv-288409 kubelet[1524]: I0926 22:34:33.071999    1524 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1ecac65-9974-4c9c-a3fa-2a23b59e0583-xtables-lock\") pod \"kube-proxy-b8w46\" (UID: \"a1ecac65-9974-4c9c-a3fa-2a23b59e0583\") " pod="kube-system/kube-proxy-b8w46"
	Sep 26 22:34:33 dockerenv-288409 kubelet[1524]: I0926 22:34:33.072028    1524 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x78pk\" (UniqueName: \"kubernetes.io/projected/dbd80d7c-ac73-4ace-b92a-c92e83855505-kube-api-access-x78pk\") pod \"kindnet-j7zqb\" (UID: \"dbd80d7c-ac73-4ace-b92a-c92e83855505\") " pod="kube-system/kindnet-j7zqb"
	Sep 26 22:34:33 dockerenv-288409 kubelet[1524]: I0926 22:34:33.172892    1524 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5d4x\" (UniqueName: \"kubernetes.io/projected/1a3a32b2-531c-4c7e-80f8-1fb4c90a7113-kube-api-access-q5d4x\") pod \"coredns-66bc5c9577-ks8sh\" (UID: \"1a3a32b2-531c-4c7e-80f8-1fb4c90a7113\") " pod="kube-system/coredns-66bc5c9577-ks8sh"
	Sep 26 22:34:33 dockerenv-288409 kubelet[1524]: I0926 22:34:33.173051    1524 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a3a32b2-531c-4c7e-80f8-1fb4c90a7113-config-volume\") pod \"coredns-66bc5c9577-ks8sh\" (UID: \"1a3a32b2-531c-4c7e-80f8-1fb4c90a7113\") " pod="kube-system/coredns-66bc5c9577-ks8sh"
	Sep 26 22:34:33 dockerenv-288409 kubelet[1524]: E0926 22:34:33.472345    1524 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e7469dea77034071c540151bc35e8fa275f94c06d5888615f95f61f1ad72c82\": failed to find network info for sandbox \"8e7469dea77034071c540151bc35e8fa275f94c06d5888615f95f61f1ad72c82\""
	Sep 26 22:34:33 dockerenv-288409 kubelet[1524]: E0926 22:34:33.472413    1524 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e7469dea77034071c540151bc35e8fa275f94c06d5888615f95f61f1ad72c82\": failed to find network info for sandbox \"8e7469dea77034071c540151bc35e8fa275f94c06d5888615f95f61f1ad72c82\"" pod="kube-system/coredns-66bc5c9577-ks8sh"
	Sep 26 22:34:33 dockerenv-288409 kubelet[1524]: E0926 22:34:33.472446    1524 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e7469dea77034071c540151bc35e8fa275f94c06d5888615f95f61f1ad72c82\": failed to find network info for sandbox \"8e7469dea77034071c540151bc35e8fa275f94c06d5888615f95f61f1ad72c82\"" pod="kube-system/coredns-66bc5c9577-ks8sh"
	Sep 26 22:34:33 dockerenv-288409 kubelet[1524]: E0926 22:34:33.472510    1524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ks8sh_kube-system(1a3a32b2-531c-4c7e-80f8-1fb4c90a7113)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ks8sh_kube-system(1a3a32b2-531c-4c7e-80f8-1fb4c90a7113)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e7469dea77034071c540151bc35e8fa275f94c06d5888615f95f61f1ad72c82\\\": failed to find network info for sandbox \\\"8e7469dea77034071c540151bc35e8fa275f94c06d5888615f95f61f1ad72c82\\\"\"" pod="kube-system/coredns-66bc5c9577-ks8sh" podUID="1a3a32b2-531c-4c7e-80f8-1fb4c90a7113"
	Sep 26 22:34:34 dockerenv-288409 kubelet[1524]: I0926 22:34:34.281112    1524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.281092756 podStartE2EDuration="6.281092756s" podCreationTimestamp="2025-09-26 22:34:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-26 22:34:34.28089295 +0000 UTC m=+7.104149633" watchObservedRunningTime="2025-09-26 22:34:34.281092756 +0000 UTC m=+7.104349438"
	Sep 26 22:34:34 dockerenv-288409 kubelet[1524]: I0926 22:34:34.297489    1524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b8w46" podStartSLOduration=1.297468821 podStartE2EDuration="1.297468821s" podCreationTimestamp="2025-09-26 22:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-26 22:34:34.297218125 +0000 UTC m=+7.120474810" watchObservedRunningTime="2025-09-26 22:34:34.297468821 +0000 UTC m=+7.120725503"
	Sep 26 22:34:37 dockerenv-288409 kubelet[1524]: I0926 22:34:37.489925    1524 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 26 22:34:37 dockerenv-288409 kubelet[1524]: I0926 22:34:37.490798    1524 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 26 22:34:37 dockerenv-288409 kubelet[1524]: I0926 22:34:37.988106    1524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-j7zqb" podStartSLOduration=4.988084184 podStartE2EDuration="4.988084184s" podCreationTimestamp="2025-09-26 22:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-26 22:34:34.304902857 +0000 UTC m=+7.128159540" watchObservedRunningTime="2025-09-26 22:34:37.988084184 +0000 UTC m=+10.811340868"
	
	
	==> storage-provisioner [747fbf3b16772930c7f5f29583387afea27865f28e72502aecbc606127778af9] <==
	I0926 22:34:33.881137       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p dockerenv-288409 -n dockerenv-288409
helpers_test.go:269: (dbg) Run:  kubectl --context dockerenv-288409 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-ks8sh
helpers_test.go:282: ======> post-mortem[TestDockerEnvContainerd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context dockerenv-288409 describe pod coredns-66bc5c9577-ks8sh
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context dockerenv-288409 describe pod coredns-66bc5c9577-ks8sh: exit status 1 (57.38141ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-ks8sh" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context dockerenv-288409 describe pod coredns-66bc5c9577-ks8sh: exit status 1
helpers_test.go:175: Cleaning up "dockerenv-288409" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-288409
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-288409: (2.236048884s)
--- FAIL: TestDockerEnvContainerd (36.75s)
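A plausible reading of the failure above, not asserted by the test itself: the coredns pod hit "failed to find network info for sandbox" because its sandbox was created before kindnet had dropped a CNI config (the kubelet only picked up PodCIDR 10.244.0.0/24 at 22:34:37), and by post-mortem time the pod name coredns-66bc5c9577-ks8sh no longer existed, presumably having been recreated. If the dockerenv-288409 profile were still up, querying CoreDNS by label rather than by the stale pod name would avoid the NotFound error, e.g.:

	# diagnostic sketch only; context name taken from the logs above
	kubectl --context dockerenv-288409 -n kube-system get pods -l k8s-app=kube-dns -o wide
	kubectl --context dockerenv-288409 -n kube-system describe pods -l k8s-app=kube-dns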

                                                
                                    
TestFunctional/parallel/DashboardCmd (302.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-459506 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-459506 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-459506 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-459506 --alsologtostderr -v=1] stderr:
I0926 22:42:47.034344   62795 out.go:360] Setting OutFile to fd 1 ...
I0926 22:42:47.034496   62795 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:42:47.034507   62795 out.go:374] Setting ErrFile to fd 2...
I0926 22:42:47.034512   62795 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:42:47.034679   62795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
I0926 22:42:47.034953   62795 mustload.go:65] Loading cluster: functional-459506
I0926 22:42:47.035258   62795 config.go:182] Loaded profile config "functional-459506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0926 22:42:47.035606   62795 cli_runner.go:164] Run: docker container inspect functional-459506 --format={{.State.Status}}
I0926 22:42:47.052259   62795 host.go:66] Checking if "functional-459506" exists ...
I0926 22:42:47.052499   62795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0926 22:42:47.106235   62795 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-26 22:42:47.096134917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0926 22:42:47.106399   62795 api_server.go:166] Checking apiserver status ...
I0926 22:42:47.106450   62795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0926 22:42:47.106499   62795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-459506
I0926 22:42:47.122784   62795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/functional-459506/id_rsa Username:docker}
I0926 22:42:47.219728   62795 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5084/cgroup
W0926 22:42:47.228562   62795 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5084/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I0926 22:42:47.228598   62795 ssh_runner.go:195] Run: ls
I0926 22:42:47.231863   62795 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0926 22:42:47.236813   62795 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W0926 22:42:47.236853   62795 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0926 22:42:47.237005   62795 config.go:182] Loaded profile config "functional-459506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0926 22:42:47.237022   62795 addons.go:69] Setting dashboard=true in profile "functional-459506"
I0926 22:42:47.237030   62795 addons.go:238] Setting addon dashboard=true in "functional-459506"
I0926 22:42:47.237064   62795 host.go:66] Checking if "functional-459506" exists ...
I0926 22:42:47.237386   62795 cli_runner.go:164] Run: docker container inspect functional-459506 --format={{.State.Status}}
I0926 22:42:47.255977   62795 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0926 22:42:47.257091   62795 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0926 22:42:47.258098   62795 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0926 22:42:47.258112   62795 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0926 22:42:47.258151   62795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-459506
I0926 22:42:47.274030   62795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/functional-459506/id_rsa Username:docker}
I0926 22:42:47.377670   62795 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0926 22:42:47.377691   62795 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0926 22:42:47.394704   62795 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0926 22:42:47.394720   62795 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0926 22:42:47.411774   62795 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0926 22:42:47.411795   62795 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0926 22:42:47.429167   62795 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0926 22:42:47.429188   62795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0926 22:42:47.446039   62795 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0926 22:42:47.446060   62795 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0926 22:42:47.463181   62795 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0926 22:42:47.463202   62795 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0926 22:42:47.480267   62795 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0926 22:42:47.480289   62795 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0926 22:42:47.497586   62795 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0926 22:42:47.497607   62795 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0926 22:42:47.514419   62795 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0926 22:42:47.514443   62795 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0926 22:42:47.531374   62795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0926 22:42:47.927053   62795 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-459506 addons enable metrics-server

I0926 22:42:47.928040   62795 addons.go:201] Writing out "functional-459506" config to set dashboard=true...
W0926 22:42:47.928239   62795 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0926 22:42:47.928849   62795 kapi.go:59] client config for functional-459506: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt", KeyFile:"/home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.key", CAFile:"/home/jenkins/minikube-integration/21642-9508/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0926 22:42:47.929255   62795 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0926 22:42:47.929272   62795 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0926 22:42:47.929277   62795 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0926 22:42:47.929285   62795 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0926 22:42:47.929289   62795 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0926 22:42:47.935723   62795 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  29b50dae-8106-495b-a6c0-5c3d2148912c 1205 0 2025-09-26 22:42:47 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-26 22:42:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.103.131.25,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.103.131.25],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0926 22:42:47.935874   62795 out.go:285] * Launching proxy ...
* Launching proxy ...
I0926 22:42:47.935927   62795 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-459506 proxy --port 36195]
I0926 22:42:47.936203   62795 dashboard.go:157] Waiting for kubectl to output host:port ...
I0926 22:42:47.978781   62795 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0926 22:42:47.978848   62795 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0926 22:42:47.986525   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4d7d955d-ffe8-4ea3-9dc8-ff1ca7617cd1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:47 GMT]] Body:0xc0007fb140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001528c0 TLS:<nil>}
I0926 22:42:47.986607   62795 retry.go:31] will retry after 112.66µs: Temporary Error: unexpected response code: 503
I0926 22:42:47.989543   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b329603f-61ef-4ce1-a793-9473e9fc886e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:47 GMT]] Body:0xc000251c00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316140 TLS:<nil>}
I0926 22:42:47.989592   62795 retry.go:31] will retry after 87.94µs: Temporary Error: unexpected response code: 503
I0926 22:42:47.992397   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cc809eb6-cce6-4977-9046-82948b3ea65b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:47 GMT]] Body:0xc0008d9d00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00160e000 TLS:<nil>}
I0926 22:42:47.992435   62795 retry.go:31] will retry after 176.194µs: Temporary Error: unexpected response code: 503
I0926 22:42:47.995326   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[464a54a2-a389-4eed-9f66-aae1f309e834] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:47 GMT]] Body:0xc0007fb2c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000152a00 TLS:<nil>}
I0926 22:42:47.995369   62795 retry.go:31] will retry after 371.556µs: Temporary Error: unexpected response code: 503
I0926 22:42:47.998295   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[963d473c-f910-495c-bf11-936affa053a6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:47 GMT]] Body:0xc0008d9e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316280 TLS:<nil>}
I0926 22:42:47.998330   62795 retry.go:31] will retry after 269.127µs: Temporary Error: unexpected response code: 503
I0926 22:42:48.001039   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ea57d606-4edb-4976-8d44-4307df54963b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:48 GMT]] Body:0xc0007fb400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000152b40 TLS:<nil>}
I0926 22:42:48.001074   62795 retry.go:31] will retry after 855.105µs: Temporary Error: unexpected response code: 503
I0926 22:42:48.003745   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[653baff7-dd73-447c-890e-0b45a70a6e8c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:48 GMT]] Body:0xc0008d9ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003163c0 TLS:<nil>}
I0926 22:42:48.003800   62795 retry.go:31] will retry after 677.194µs: Temporary Error: unexpected response code: 503
I0926 22:42:48.006493   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b79473ad-442b-4726-b2a8-c1d2aa59a687] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:48 GMT]] Body:0xc0007fb580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000152c80 TLS:<nil>}
I0926 22:42:48.006535   62795 retry.go:31] will retry after 1.530318ms: Temporary Error: unexpected response code: 503
I0926 22:42:48.010286   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[92919487-cd2e-4952-ad58-ffb7d3cd02f4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:48 GMT]] Body:0xc000874780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316500 TLS:<nil>}
I0926 22:42:48.010317   62795 retry.go:31] will retry after 2.510122ms: Temporary Error: unexpected response code: 503
I0926 22:42:48.015124   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[682d5765-32a5-4ed9-a61d-32f9ed8fb466] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:48 GMT]] Body:0xc0007fb680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000152dc0 TLS:<nil>}
I0926 22:42:48.015152   62795 retry.go:31] will retry after 2.744095ms: Temporary Error: unexpected response code: 503
I0926 22:42:48.019955   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[362eb775-0769-494f-bec0-cabedc7c7112] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:48 GMT]] Body:0xc000251d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316640 TLS:<nil>}
I0926 22:42:48.019987   62795 retry.go:31] will retry after 7.326735ms: Temporary Error: unexpected response code: 503
I0926 22:42:48.029664   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6ac92190-5969-4b5f-bfe8-5fd595842e85] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:48 GMT]] Body:0xc000874880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316780 TLS:<nil>}
I0926 22:42:48.029701   62795 retry.go:31] will retry after 12.439254ms: Temporary Error: unexpected response code: 503
I0926 22:42:48.044443   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e8291dcf-4879-46a7-a72a-85f0e929ad96] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:48 GMT]] Body:0xc0007fb8c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000152f00 TLS:<nil>}
I0926 22:42:48.044470   62795 retry.go:31] will retry after 11.546285ms: Temporary Error: unexpected response code: 503
I0926 22:42:48.058346   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[34f29a06-c459-435f-890f-3145beb218d8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:48 GMT]] Body:0xc0007fb980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003168c0 TLS:<nil>}
I0926 22:42:48.058383   62795 retry.go:31] will retry after 16.883209ms: Temporary Error: unexpected response code: 503
I0926 22:42:48.077192   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fccf5e43-b0ef-4acb-9b3a-9981062ad0ac] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:48 GMT]] Body:0xc000251e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316a00 TLS:<nil>}
I0926 22:42:48.077232   62795 retry.go:31] will retry after 21.822227ms: Temporary Error: unexpected response code: 503
I0926 22:42:48.101058   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[38e2790b-8ec4-4f07-b95c-d5e69d17d2c6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:48 GMT]] Body:0xc0007fbac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00160e140 TLS:<nil>}
I0926 22:42:48.101111   62795 retry.go:31] will retry after 22.871151ms: Temporary Error: unexpected response code: 503
I0926 22:42:48.125982   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[86a5efeb-50d6-4b25-a729-17d864e53058] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:48 GMT]] Body:0xc000874b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316b40 TLS:<nil>}
I0926 22:42:48.126028   62795 retry.go:31] will retry after 82.966432ms: Temporary Error: unexpected response code: 503
I0926 22:42:48.211553   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[460b7ec2-c5f5-48b7-82a7-46d6206f9abc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:48 GMT]] Body:0xc00162a040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000153040 TLS:<nil>}
I0926 22:42:48.211595   62795 retry.go:31] will retry after 146.976875ms: Temporary Error: unexpected response code: 503
I0926 22:42:48.361110   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2e7ac0d0-4eb7-4cf1-a8c8-682aa3d5e926] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:48 GMT]] Body:0xc0007fbb80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00160e280 TLS:<nil>}
I0926 22:42:48.361160   62795 retry.go:31] will retry after 141.533505ms: Temporary Error: unexpected response code: 503
I0926 22:42:48.505601   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[55806a71-22d9-476e-a87e-f16d131bbcda] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:48 GMT]] Body:0xc00162a140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316c80 TLS:<nil>}
I0926 22:42:48.505654   62795 retry.go:31] will retry after 200.09661ms: Temporary Error: unexpected response code: 503
I0926 22:42:48.708106   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a3ec80b9-a3e2-49e3-a841-c59fa1355d57] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:48 GMT]] Body:0xc000874d00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00160e3c0 TLS:<nil>}
I0926 22:42:48.708162   62795 retry.go:31] will retry after 429.697272ms: Temporary Error: unexpected response code: 503
I0926 22:42:49.140910   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3ffb8980-c102-4754-9dc6-badb4b2aeb06] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:49 GMT]] Body:0xc0007fbcc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000153180 TLS:<nil>}
I0926 22:42:49.140970   62795 retry.go:31] will retry after 364.470865ms: Temporary Error: unexpected response code: 503
I0926 22:42:49.508801   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ce8f31dd-828d-48f2-9d58-6ed73595d58a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:49 GMT]] Body:0xc000874e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316dc0 TLS:<nil>}
I0926 22:42:49.508862   62795 retry.go:31] will retry after 489.156955ms: Temporary Error: unexpected response code: 503
I0926 22:42:50.000760   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[006c1721-3bcd-4ca6-b1f7-d5df98c44fd2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:50 GMT]] Body:0xc00162a240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001532c0 TLS:<nil>}
I0926 22:42:50.000824   62795 retry.go:31] will retry after 1.249427772s: Temporary Error: unexpected response code: 503
I0926 22:42:51.253546   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c52490a8-2e2e-4254-b7cf-87c01436d15d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:51 GMT]] Body:0xc00162a300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00160e500 TLS:<nil>}
I0926 22:42:51.253610   62795 retry.go:31] will retry after 1.89550157s: Temporary Error: unexpected response code: 503
I0926 22:42:53.152643   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b7694e40-724b-41ea-b4c0-0db3d9f30c20] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:53 GMT]] Body:0xc00162a380 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00160e640 TLS:<nil>}
I0926 22:42:53.152718   62795 retry.go:31] will retry after 3.351457448s: Temporary Error: unexpected response code: 503
I0926 22:42:56.509810   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[27a5a78f-f0a0-4a61-b958-4b3d49390191] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:56 GMT]] Body:0xc0016ac0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00160e780 TLS:<nil>}
I0926 22:42:56.509895   62795 retry.go:31] will retry after 5.181183259s: Temporary Error: unexpected response code: 503
I0926 22:43:01.695251   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[35ee4059-771f-45e9-a06a-d8113650fcf5] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:43:01 GMT]] Body:0xc0016ac140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000153400 TLS:<nil>}
I0926 22:43:01.695302   62795 retry.go:31] will retry after 6.119799317s: Temporary Error: unexpected response code: 503
I0926 22:43:07.818571   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9d9c9a05-cb9a-47e4-9a5f-68f221a2aa18] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:43:07 GMT]] Body:0xc000875040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316f00 TLS:<nil>}
I0926 22:43:07.818625   62795 retry.go:31] will retry after 6.269505579s: Temporary Error: unexpected response code: 503
I0926 22:43:14.092296   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3f03fb64-c80f-406f-9c38-0008ab585982] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:43:14 GMT]] Body:0xc0016ac240 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000153540 TLS:<nil>}
I0926 22:43:14.092354   62795 retry.go:31] will retry after 11.854059152s: Temporary Error: unexpected response code: 503
I0926 22:43:25.952307   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ed57fbce-48a7-40cc-8cf0-5b687559c1b3] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:43:25 GMT]] Body:0xc00162a480 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000153680 TLS:<nil>}
I0926 22:43:25.952378   62795 retry.go:31] will retry after 21.402552069s: Temporary Error: unexpected response code: 503
I0926 22:43:47.358310   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7448afb7-fffa-4d25-aea4-edad64648dcb] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:43:47 GMT]] Body:0xc0008752c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00160e8c0 TLS:<nil>}
I0926 22:43:47.358378   62795 retry.go:31] will retry after 17.649100767s: Temporary Error: unexpected response code: 503
I0926 22:44:05.012135   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b116b6e2-2060-4b89-9a62-44751af08555] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:44:05 GMT]] Body:0xc000875340 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00160ea00 TLS:<nil>}
I0926 22:44:05.012191   62795 retry.go:31] will retry after 33.204498396s: Temporary Error: unexpected response code: 503
I0926 22:44:38.219784   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[55a79add-e6db-4dfd-8d7d-ecad3dc9473c] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:44:38 GMT]] Body:0xc000875400 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317040 TLS:<nil>}
I0926 22:44:38.219843   62795 retry.go:31] will retry after 1m9.868345175s: Temporary Error: unexpected response code: 503
I0926 22:45:48.091705   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[95f1b304-e550-49be-a0bb-433574d2df52] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:45:48 GMT]] Body:0xc000874840 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000152000 TLS:<nil>}
I0926 22:45:48.091822   62795 retry.go:31] will retry after 31.546081151s: Temporary Error: unexpected response code: 503
I0926 22:46:19.643161   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[90145cfd-67d3-409e-8e38-803a3b9a7ccc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:46:19 GMT]] Body:0xc0016ac080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317180 TLS:<nil>}
I0926 22:46:19.643242   62795 retry.go:31] will retry after 49.309824827s: Temporary Error: unexpected response code: 503
I0926 22:47:08.956883   62795 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[94a10752-40ae-4eaf-b0a2-943d68db56fb] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:47:08 GMT]] Body:0xc0004da0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000152140 TLS:<nil>}
I0926 22:47:08.956954   62795 retry.go:31] will retry after 40.938771966s: Temporary Error: unexpected response code: 503
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-459506
helpers_test.go:243: (dbg) docker inspect functional-459506:

-- stdout --
	[
	    {
	        "Id": "d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917",
	        "Created": "2025-09-26T22:35:21.920836916Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 45420,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-26T22:35:21.951781694Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917/hostname",
	        "HostsPath": "/var/lib/docker/containers/d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917/hosts",
	        "LogPath": "/var/lib/docker/containers/d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917/d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917-json.log",
	        "Name": "/functional-459506",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-459506:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-459506",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917",
	                "LowerDir": "/var/lib/docker/overlay2/bd668024ca4cca7750265350f3fd8afee0721ce008e144a8f8a5b04847ef3880-init/diff:/var/lib/docker/overlay2/9d3f38ae04ffa0ee7bbacc3f831d8e286eafea1eb3c677a38c62c87997e117c6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd668024ca4cca7750265350f3fd8afee0721ce008e144a8f8a5b04847ef3880/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd668024ca4cca7750265350f3fd8afee0721ce008e144a8f8a5b04847ef3880/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd668024ca4cca7750265350f3fd8afee0721ce008e144a8f8a5b04847ef3880/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-459506",
	                "Source": "/var/lib/docker/volumes/functional-459506/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-459506",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-459506",
	                "name.minikube.sigs.k8s.io": "functional-459506",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb0f8342093a0b817dd54ab2bfc7283d5c3b97c478a905330b0fb0f03d232a34",
	            "SandboxKey": "/var/run/docker/netns/fb0f8342093a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-459506": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:64:7a:80:ed:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b1d72584285bd0f2762e93cd89eea0f410798a5f4c51ad294c42f4fa0b4247fe",
	                    "EndpointID": "d3c98e2363a4eab3bdc87cfbc565ff15bb3e69f484dbf18a36fe7e0d357135a4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-459506",
	                        "d095d86ee54b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-459506 -n functional-459506
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-459506 logs -n 25: (1.308043576s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                                 ARGS                                                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-459506 image save kicbase/echo-server:functional-459506 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image          │ functional-459506 image rm kicbase/echo-server:functional-459506 --alsologtostderr                                                                                    │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image          │ functional-459506 image ls                                                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image          │ functional-459506 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr                                       │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image          │ functional-459506 image ls                                                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image          │ functional-459506 image save --daemon kicbase/echo-server:functional-459506 --alsologtostderr                                                                         │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ start          │ -p functional-459506 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                                                       │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ start          │ -p functional-459506 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                 │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ start          │ -p functional-459506 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                                                       │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-459506 --alsologtostderr -v=1                                                                                                        │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ service        │ functional-459506 service list                                                                                                                                        │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ service        │ functional-459506 service list -o json                                                                                                                                │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ service        │ functional-459506 service --namespace=default --https --url hello-node                                                                                                │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │                     │
	│ service        │ functional-459506 service hello-node --url --format={{.IP}}                                                                                                           │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │                     │
	│ update-context │ functional-459506 update-context --alsologtostderr -v=2                                                                                                               │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ update-context │ functional-459506 update-context --alsologtostderr -v=2                                                                                                               │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ service        │ functional-459506 service hello-node --url                                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │                     │
	│ update-context │ functional-459506 update-context --alsologtostderr -v=2                                                                                                               │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ image          │ functional-459506 image ls --format short --alsologtostderr                                                                                                           │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ image          │ functional-459506 image ls --format yaml --alsologtostderr                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ ssh            │ functional-459506 ssh pgrep buildkitd                                                                                                                                 │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │                     │
	│ image          │ functional-459506 image build -t localhost/my-image:functional-459506 testdata/build --alsologtostderr                                                                │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ image          │ functional-459506 image ls --format json --alsologtostderr                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ image          │ functional-459506 image ls --format table --alsologtostderr                                                                                                           │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ image          │ functional-459506 image ls                                                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:42:46
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:42:46.896482   62711 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:42:46.896577   62711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:42:46.896585   62711 out.go:374] Setting ErrFile to fd 2...
	I0926 22:42:46.896589   62711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:42:46.896870   62711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
	I0926 22:42:46.897298   62711 out.go:368] Setting JSON to false
	I0926 22:42:46.898165   62711 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1502,"bootTime":1758925065,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:42:46.898235   62711 start.go:140] virtualization: kvm guest
	I0926 22:42:46.899971   62711 out.go:179] * [functional-459506] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:42:46.900984   62711 notify.go:220] Checking for updates...
	I0926 22:42:46.900989   62711 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:42:46.901968   62711 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:42:46.902910   62711 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-9508/kubeconfig
	I0926 22:42:46.904148   62711 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube
	I0926 22:42:46.905129   62711 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:42:46.906068   62711 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:42:46.907486   62711 config.go:182] Loaded profile config "functional-459506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0926 22:42:46.908008   62711 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:42:46.930088   62711 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:42:46.930160   62711 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:42:46.982478   62711 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-26 22:42:46.973277108 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:42:46.982575   62711 docker.go:318] overlay module found
	I0926 22:42:46.984004   62711 out.go:179] * Using the docker driver based on existing profile
	I0926 22:42:46.985050   62711 start.go:304] selected driver: docker
	I0926 22:42:46.985070   62711 start.go:924] validating driver "docker" against &{Name:functional-459506 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-459506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:42:46.985173   62711 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:42:46.986851   62711 out.go:203] 
	W0926 22:42:46.987810   62711 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0926 22:42:46.988796   62711 out.go:203] 
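The RSRC_INSUFFICIENT_REQ_MEMORY exit above is minikube refusing the requested 250MiB allocation because it is below the 1800MB minimum it enforces. For reference only (this invocation is a sketch, not part of the captured run; the profile name and memory value are illustrative), a start command that clears that floor would look roughly like:

	# illustrative only: request a memory allocation above minikube's 1800MB minimum
	out/minikube-linux-amd64 start -p functional-459506 --memory=2048 --driver=docker --container-runtime=containerd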
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ca62526b2c327       56cc512116c8f       5 minutes ago       Exited              mount-munger              0                   fecae25f41ca9       busybox-mount
	cebf1f1ed6be1       6e38f40d628db       11 minutes ago      Running             storage-provisioner       2                   0f104490635a6       storage-provisioner
	e5a30b0760041       90550c43ad2bc       11 minutes ago      Running             kube-apiserver            0                   78751bafaf7a4       kube-apiserver-functional-459506
	8e603c814a88f       a0af72f2ec6d6       11 minutes ago      Running             kube-controller-manager   2                   9f72adfa3efa4       kube-controller-manager-functional-459506
	16663cf3fd5d1       5f1f5298c888d       11 minutes ago      Running             etcd                      1                   91a9c6f7a15e2       etcd-functional-459506
	6989a06c1aa04       a0af72f2ec6d6       11 minutes ago      Exited              kube-controller-manager   1                   9f72adfa3efa4       kube-controller-manager-functional-459506
	c894d70efe2fc       46169d968e920       11 minutes ago      Running             kube-scheduler            1                   0f4b676619c64       kube-scheduler-functional-459506
	a264dd8f5b4a2       df0860106674d       11 minutes ago      Running             kube-proxy                1                   546d39f814afe       kube-proxy-2wtsn
	8bd6c0af7c48b       409467f978b4a       11 minutes ago      Running             kindnet-cni               1                   1eaf123c6da9f       kindnet-l54kz
	4a47257142396       52546a367cc9e       11 minutes ago      Running             coredns                   1                   475dc21959dca       coredns-66bc5c9577-4vrmt
	903a74e2d7853       6e38f40d628db       11 minutes ago      Exited              storage-provisioner       1                   0f104490635a6       storage-provisioner
	e40a4f9b16a60       52546a367cc9e       11 minutes ago      Exited              coredns                   0                   475dc21959dca       coredns-66bc5c9577-4vrmt
	6f0081db32335       409467f978b4a       12 minutes ago      Exited              kindnet-cni               0                   1eaf123c6da9f       kindnet-l54kz
	d99db3f0a539a       df0860106674d       12 minutes ago      Exited              kube-proxy                0                   546d39f814afe       kube-proxy-2wtsn
	bbe132d91cab0       46169d968e920       12 minutes ago      Exited              kube-scheduler            0                   0f4b676619c64       kube-scheduler-functional-459506
	15228ae0744fa       5f1f5298c888d       12 minutes ago      Exited              etcd                      0                   91a9c6f7a15e2       etcd-functional-459506
	
	
	==> containerd <==
	Sep 26 22:44:41 functional-459506 containerd[3896]: time="2025-09-26T22:44:41.949310135Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:44:42 functional-459506 containerd[3896]: time="2025-09-26T22:44:42.537639249Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:44:44 functional-459506 containerd[3896]: time="2025-09-26T22:44:44.181352998Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:44:44 functional-459506 containerd[3896]: time="2025-09-26T22:44:44.181390517Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
	Sep 26 22:45:53 functional-459506 containerd[3896]: time="2025-09-26T22:45:53.948401894Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 26 22:45:53 functional-459506 containerd[3896]: time="2025-09-26T22:45:53.950305773Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:45:54 functional-459506 containerd[3896]: time="2025-09-26T22:45:54.539937797Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:45:56 functional-459506 containerd[3896]: time="2025-09-26T22:45:56.553132808Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:45:56 functional-459506 containerd[3896]: time="2025-09-26T22:45:56.553214233Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=12711"
	Sep 26 22:46:00 functional-459506 containerd[3896]: time="2025-09-26T22:46:00.951158207Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 26 22:46:00 functional-459506 containerd[3896]: time="2025-09-26T22:46:00.952670005Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:46:01 functional-459506 containerd[3896]: time="2025-09-26T22:46:01.534842031Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:46:03 functional-459506 containerd[3896]: time="2025-09-26T22:46:03.187721892Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:46:03 functional-459506 containerd[3896]: time="2025-09-26T22:46:03.187779049Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Sep 26 22:46:09 functional-459506 containerd[3896]: time="2025-09-26T22:46:09.947399005Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Sep 26 22:46:09 functional-459506 containerd[3896]: time="2025-09-26T22:46:09.949044888Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:46:10 functional-459506 containerd[3896]: time="2025-09-26T22:46:10.540196740Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:46:12 functional-459506 containerd[3896]: time="2025-09-26T22:46:12.177146732Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:46:12 functional-459506 containerd[3896]: time="2025-09-26T22:46:12.177209329Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10966"
	Sep 26 22:47:08 functional-459506 containerd[3896]: time="2025-09-26T22:47:08.958162649Z" level=info msg="shim disconnected" id=21g47fblvggy8ztbc3wpz61x8 namespace=k8s.io
	Sep 26 22:47:08 functional-459506 containerd[3896]: time="2025-09-26T22:47:08.958203468Z" level=warning msg="cleaning up after shim disconnected" id=21g47fblvggy8ztbc3wpz61x8 namespace=k8s.io
	Sep 26 22:47:08 functional-459506 containerd[3896]: time="2025-09-26T22:47:08.958218606Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 26 22:47:09 functional-459506 containerd[3896]: time="2025-09-26T22:47:09.070959575Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-459506\""
	Sep 26 22:47:09 functional-459506 containerd[3896]: time="2025-09-26T22:47:09.074108627Z" level=info msg="ImageCreate event name:\"sha256:1c2cf4418cd5d8303a9e28acfcddecbc763b2e4d037f11cbeffc30c9ed240b2d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 26 22:47:09 functional-459506 containerd[3896]: time="2025-09-26T22:47:09.074649069Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-459506\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
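The repeated "failed to decode hosts.toml" / "invalid `host` tree" errors and the 429 "toomanyrequests" responses above come from containerd's per-registry host configuration and from unauthenticated pulls against registry-1.docker.io. As a hedged sketch only (the mirror URL is illustrative and nothing here was taken from this run), a hosts.toml that containerd's CRI plugin can parse, routing docker.io pulls through a mirror, would be written inside the node (e.g. via minikube ssh) roughly like this:

	# illustrative sketch: registry host config for docker.io under containerd's certs.d layout
	sudo mkdir -p /etc/containerd/certs.d/docker.io
	sudo tee /etc/containerd/certs.d/docker.io/hosts.toml <<-'EOF'
	server = "https://registry-1.docker.io"

	[host."https://mirror.gcr.io"]
	  capabilities = ["pull", "resolve"]
	EOF

containerd only consults these files when the CRI registry config_path option points at /etc/containerd/certs.d, and a malformed [host."…"] table in such a file is typically what produces the "invalid `host` tree" message seen in the log.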
	
	
	==> coredns [4a47257142396d0a917fecabd4ae47f729eb1ab3570ffb7517ff9f5248fd93df] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48290 - 21129 "HINFO IN 8280138097893442510.5169536380750645255. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023990945s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e40a4f9b16a6001c5ae0925a33fdc6dedeeb89585171a66821936c02876500f5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51300 - 29788 "HINFO IN 4563362523290822031.8774679367264300029. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069789178s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-459506
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-459506
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=functional-459506
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T22_35_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 22:35:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-459506
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 22:47:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 22:47:38 +0000   Fri, 26 Sep 2025 22:35:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 22:47:38 +0000   Fri, 26 Sep 2025 22:35:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 22:47:38 +0000   Fri, 26 Sep 2025 22:35:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 22:47:38 +0000   Fri, 26 Sep 2025 22:35:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-459506
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 05e4574455ab4b559c781aee570b04b3
	  System UUID:                d46c27bc-3376-49b5-80bd-4cdd4f761af8
	  Boot ID:                    d6777c8b-c717-4851-a50e-a884fc659348
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-c4qtx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-g9scz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-cv8kj                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     4m39s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-4vrmt                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-459506                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-l54kz                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-459506              250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-459506     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-2wtsn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-459506              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-5xhv2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-59n29         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-459506 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-459506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-459506 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-459506 event: Registered Node functional-459506 in Controller
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-459506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-459506 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-459506 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                node-controller  Node functional-459506 event: Registered Node functional-459506 in Controller
	
	
	==> dmesg <==
	[Sep26 22:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001877] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.086010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.387443] i8042: Warning: Keylock active
	[  +0.011484] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004689] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000998] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.001003] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000986] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.001141] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000947] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.001004] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001049] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001043] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.448971] block sda: the capability attribute has been deprecated.
	[  +0.076726] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021403] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.907524] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [15228ae0744fa3d8d71e9ed9acb7601ebe23cd47d92475f3358c2b085a409570] <==
	{"level":"warn","ts":"2025-09-26T22:35:32.476089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.482777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.488658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.494335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.509868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.515650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.567513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36896","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:36:32.055542Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-26T22:36:32.055623Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-459506","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-26T22:36:32.055736Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:36:32.057359Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:36:32.057441Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:36:32.057539Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-26T22:36:32.058027Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-26T22:36:32.058019Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-26T22:36:32.057993Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:36:32.058205Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:36:32.058215Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:36:32.058223Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-26T22:36:32.058230Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-26T22:36:32.058236Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:36:32.059913Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-26T22:36:32.059967Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:36:32.060006Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-26T22:36:32.060044Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-459506","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [16663cf3fd5d10b83679013fbc8fc1c36cf64834b3eae54f2ef5c88da055361c] <==
	{"level":"warn","ts":"2025-09-26T22:36:36.402507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.408943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.415035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.422084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.429205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.435131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.441633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.448848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.454809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.462310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.475036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.480895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.487563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.493602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.499669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.505495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.512230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.519707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.529837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.536990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.543375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.594597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59580","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:46:36.125492Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1071}
	{"level":"info","ts":"2025-09-26T22:46:36.144507Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1071,"took":"18.66727ms","hash":1726823260,"current-db-size-bytes":3829760,"current-db-size":"3.8 MB","current-db-size-in-use-bytes":1908736,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-09-26T22:46:36.144558Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1726823260,"revision":1071,"compact-revision":-1}
	
	
	==> kernel <==
	 22:47:48 up 30 min,  0 users,  load average: 0.10, 0.23, 0.38
	Linux functional-459506 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [6f0081db3233525107e5885f7a265bdd7fc9f0e70cd992771d9aaa4ca5682337] <==
	I0926 22:35:41.942119       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0926 22:35:41.942327       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0926 22:35:41.942436       1 main.go:148] setting mtu 1500 for CNI 
	I0926 22:35:41.942455       1 main.go:178] kindnetd IP family: "ipv4"
	I0926 22:35:41.942472       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-26T22:35:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0926 22:35:42.141703       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0926 22:35:42.141777       1 controller.go:381] "Waiting for informer caches to sync"
	I0926 22:35:42.141794       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0926 22:35:42.142354       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0926 22:35:42.541865       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0926 22:35:42.541885       1 metrics.go:72] Registering metrics
	I0926 22:35:42.541946       1 controller.go:711] "Syncing nftables rules"
	I0926 22:35:52.143620       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:35:52.143696       1 main.go:301] handling current node
	I0926 22:36:02.146896       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:36:02.146939       1 main.go:301] handling current node
	I0926 22:36:12.150828       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:36:12.150867       1 main.go:301] handling current node
	
	
	==> kindnet [8bd6c0af7c48b340de1bf3a68946c513cc533581ddd4d6b0e4bf351239517410] <==
	I0926 22:45:42.891353       1 main.go:301] handling current node
	I0926 22:45:52.891630       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:45:52.891665       1 main.go:301] handling current node
	I0926 22:46:02.897806       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:46:02.897837       1 main.go:301] handling current node
	I0926 22:46:12.891354       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:46:12.891389       1 main.go:301] handling current node
	I0926 22:46:22.891694       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:46:22.891726       1 main.go:301] handling current node
	I0926 22:46:32.890693       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:46:32.890741       1 main.go:301] handling current node
	I0926 22:46:42.891174       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:46:42.891223       1 main.go:301] handling current node
	I0926 22:46:52.895095       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:46:52.895126       1 main.go:301] handling current node
	I0926 22:47:02.898848       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:47:02.898884       1 main.go:301] handling current node
	I0926 22:47:12.891715       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:47:12.891745       1 main.go:301] handling current node
	I0926 22:47:22.891602       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:47:22.891636       1 main.go:301] handling current node
	I0926 22:47:32.899011       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:47:32.899048       1 main.go:301] handling current node
	I0926 22:47:42.891545       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:47:42.891604       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e5a30b07600415b080587a2a6d1ea08b2055828357a99617f952c06563d727e2] <==
	I0926 22:37:00.548826       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.39.241"}
	I0926 22:37:01.255927       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.193.82"}
	I0926 22:37:01.977342       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.187.36"}
	I0926 22:37:36.883256       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:37:42.252049       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:38:40.206792       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:39:01.111456       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:39:58.542426       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:40:18.473793       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:41:05.376069       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:41:39.665227       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:42:31.940639       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:42:47.816380       1 controller.go:667] quota admission added evaluator for: namespaces
	I0926 22:42:47.904783       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.131.25"}
	I0926 22:42:47.920679       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.221.240"}
	I0926 22:42:56.161937       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:43:09.909055       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.107.107.136"}
	I0926 22:43:44.613318       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:44:04.440996       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:44:46.386965       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:45:15.977328       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:46:10.613817       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:46:36.975011       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0926 22:46:44.738140       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:47:39.547504       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [6989a06c1aa044081666ea274870f6b2f62081f15fddafd098ceec849ef63965] <==
	I0926 22:36:23.268600       1 serving.go:386] Generated self-signed cert in-memory
	I0926 22:36:23.628583       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0926 22:36:23.628607       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:36:23.630025       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0926 22:36:23.630038       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0926 22:36:23.630385       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0926 22:36:23.630414       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0926 22:36:33.632748       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [8e603c814a88fbfef59bb33f84ea361bd131e385ab2a4d76cc74bde2bcfaea0d] <==
	I0926 22:36:40.364466       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0926 22:36:40.364501       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0926 22:36:40.364514       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0926 22:36:40.364568       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0926 22:36:40.364622       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0926 22:36:40.364632       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0926 22:36:40.364686       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0926 22:36:40.364719       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0926 22:36:40.364730       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0926 22:36:40.364834       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-459506"
	I0926 22:36:40.364892       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0926 22:36:40.366981       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0926 22:36:40.370586       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:36:40.370617       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:36:40.370637       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0926 22:36:40.372783       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0926 22:36:40.375018       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0926 22:36:40.377335       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0926 22:36:40.385601       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0926 22:42:47.862065       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:42:47.865843       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:42:47.866048       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:42:47.868815       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:42:47.870826       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:42:47.874609       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [a264dd8f5b4a2942f0efee0b51ce7ed0adb4b1ad43db0f5b5f0c22c0ba88de78] <==
	I0926 22:36:22.558358       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0926 22:36:22.559394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-459506&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:23.538404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-459506&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:26.410869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-459506&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:31.532055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-459506&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0926 22:36:38.959447       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:36:38.959487       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:36:38.959582       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:36:38.986993       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:36:38.987069       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:36:38.994049       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:36:38.994605       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:36:38.994630       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:36:38.997310       1 config.go:200] "Starting service config controller"
	I0926 22:36:38.997330       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:36:38.997362       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:36:38.997368       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:36:38.997424       1 config.go:309] "Starting node config controller"
	I0926 22:36:38.997430       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:36:38.997436       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:36:38.997657       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:36:38.997669       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:36:39.097677       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 22:36:39.097747       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:36:39.098062       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [d99db3f0a539a19d9cf4e02c8429489ff255a6c5d2fe9f2573700d0ce0397f8f] <==
	I0926 22:35:41.509205       1 server_linux.go:53] "Using iptables proxy"
	I0926 22:35:41.575220       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:35:41.675605       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:35:41.675637       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:35:41.675771       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:35:41.699353       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:35:41.699490       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:35:41.705720       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:35:41.706093       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:35:41.706127       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:35:41.707545       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:35:41.707554       1 config.go:200] "Starting service config controller"
	I0926 22:35:41.707573       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:35:41.707594       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:35:41.707612       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:35:41.707575       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:35:41.707672       1 config.go:309] "Starting node config controller"
	I0926 22:35:41.707679       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:35:41.707684       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:35:41.807791       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:35:41.807805       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0926 22:35:41.807837       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [bbe132d91cab00583cfbee8fc0b2b826f5d89380f0d1522dccdf84bc4002a864] <==
	E0926 22:35:32.972891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:35:32.972938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:35:32.972966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:35:32.972988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:35:32.973074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:35:32.973076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:35:32.973105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:35:32.973193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:35:32.973192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:35:32.973179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:35:33.793455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:35:33.799444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 22:35:33.877548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:35:33.893413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 22:35:33.999974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:35:34.069240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:35:34.105348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:35:34.130498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:35:34.140448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I0926 22:35:34.470155       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:36:21.883098       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:36:21.883123       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0926 22:36:21.883227       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0926 22:36:21.883331       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0926 22:36:21.883366       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c894d70efe2fc6d275b679dc3901194c6f6800fe43d0055daf8fb4de89bdf15a] <==
	E0926 22:36:28.212606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:36:28.310457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0926 22:36:28.412275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:28.443003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:36:28.534103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 22:36:31.138080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:36:31.354330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:36:31.367786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:36:31.510528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 22:36:31.521081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 22:36:31.837947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:36:32.252990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:36:32.286651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 22:36:32.320204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:36:32.616030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:36:32.939676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:36:33.405067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:36:33.435786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:36:33.459236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:36:33.593227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:36:33.755685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:36:34.225507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:34.380598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0926 22:36:34.435490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I0926 22:36:46.721125       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 26 22:46:58 functional-459506 kubelet[4881]: E0926 22:46:58.946939    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="b5494cea-410c-40a9-85da-5cc71c798527"
	Sep 26 22:47:00 functional-459506 kubelet[4881]: E0926 22:47:00.950242    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-59n29" podUID="ff1b0900-53c3-461c-b185-87f7165859ca"
	Sep 26 22:47:05 functional-459506 kubelet[4881]: E0926 22:47:05.947573    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5xhv2" podUID="f48c8cd4-f309-4e69-a0b4-7c297b8f118d"
	Sep 26 22:47:06 functional-459506 kubelet[4881]: E0926 22:47:06.949955    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-c4qtx" podUID="3d1f055e-4361-4aa1-83f9-7dc31c06573a"
	Sep 26 22:47:06 functional-459506 kubelet[4881]: E0926 22:47:06.950499    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-cv8kj" podUID="0463eed8-e7cc-4a57-a2a5-94ce2843b138"
	Sep 26 22:47:08 functional-459506 kubelet[4881]: E0926 22:47:08.947841    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="f0d2d088-a017-4e6f-8a58-bf2e6db70c49"
	Sep 26 22:47:10 functional-459506 kubelet[4881]: E0926 22:47:10.946535    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="b5494cea-410c-40a9-85da-5cc71c798527"
	Sep 26 22:47:10 functional-459506 kubelet[4881]: E0926 22:47:10.946576    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-g9scz" podUID="3352791e-ffd2-43f2-a616-6553c6db8a5f"
	Sep 26 22:47:12 functional-459506 kubelet[4881]: E0926 22:47:12.947542    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-59n29" podUID="ff1b0900-53c3-461c-b185-87f7165859ca"
	Sep 26 22:47:19 functional-459506 kubelet[4881]: E0926 22:47:19.947857    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="f0d2d088-a017-4e6f-8a58-bf2e6db70c49"
	Sep 26 22:47:19 functional-459506 kubelet[4881]: E0926 22:47:19.947866    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-cv8kj" podUID="0463eed8-e7cc-4a57-a2a5-94ce2843b138"
	Sep 26 22:47:20 functional-459506 kubelet[4881]: E0926 22:47:20.946710    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-c4qtx" podUID="3d1f055e-4361-4aa1-83f9-7dc31c06573a"
	Sep 26 22:47:20 functional-459506 kubelet[4881]: E0926 22:47:20.947343    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5xhv2" podUID="f48c8cd4-f309-4e69-a0b4-7c297b8f118d"
	Sep 26 22:47:21 functional-459506 kubelet[4881]: E0926 22:47:21.946443    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-g9scz" podUID="3352791e-ffd2-43f2-a616-6553c6db8a5f"
	Sep 26 22:47:25 functional-459506 kubelet[4881]: E0926 22:47:25.946829    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="b5494cea-410c-40a9-85da-5cc71c798527"
	Sep 26 22:47:27 functional-459506 kubelet[4881]: E0926 22:47:27.947942    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-59n29" podUID="ff1b0900-53c3-461c-b185-87f7165859ca"
	Sep 26 22:47:30 functional-459506 kubelet[4881]: E0926 22:47:30.948080    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="f0d2d088-a017-4e6f-8a58-bf2e6db70c49"
	Sep 26 22:47:31 functional-459506 kubelet[4881]: E0926 22:47:31.947636    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5xhv2" podUID="f48c8cd4-f309-4e69-a0b4-7c297b8f118d"
	Sep 26 22:47:33 functional-459506 kubelet[4881]: E0926 22:47:33.947740    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-cv8kj" podUID="0463eed8-e7cc-4a57-a2a5-94ce2843b138"
	Sep 26 22:47:34 functional-459506 kubelet[4881]: E0926 22:47:34.947684    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-c4qtx" podUID="3d1f055e-4361-4aa1-83f9-7dc31c06573a"
	Sep 26 22:47:34 functional-459506 kubelet[4881]: E0926 22:47:34.947782    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-g9scz" podUID="3352791e-ffd2-43f2-a616-6553c6db8a5f"
	Sep 26 22:47:38 functional-459506 kubelet[4881]: E0926 22:47:38.947806    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-59n29" podUID="ff1b0900-53c3-461c-b185-87f7165859ca"
	Sep 26 22:47:40 functional-459506 kubelet[4881]: E0926 22:47:40.946496    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="b5494cea-410c-40a9-85da-5cc71c798527"
	Sep 26 22:47:42 functional-459506 kubelet[4881]: E0926 22:47:42.947882    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5xhv2" podUID="f48c8cd4-f309-4e69-a0b4-7c297b8f118d"
	Sep 26 22:47:45 functional-459506 kubelet[4881]: E0926 22:47:45.950366    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="f0d2d088-a017-4e6f-8a58-bf2e6db70c49"
	
	
	==> storage-provisioner [903a74e2d785332eef5dd63e71cab7027811128118514bd84afbc9721ac5c416] <==
	I0926 22:36:12.358555       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0926 22:36:12.365148       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0926 22:36:12.365186       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0926 22:36:12.367487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:36:12.373103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0926 22:36:12.373284       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0926 22:36:12.373437       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-459506_341486d7-6c55-48af-8df1-6e07d9290bc7!
	I0926 22:36:12.373420       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dcaf19cb-0770-4ca7-b54d-720d909e89f2", APIVersion:"v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-459506_341486d7-6c55-48af-8df1-6e07d9290bc7 became leader
	W0926 22:36:12.375205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:36:12.377966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0926 22:36:12.474582       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-459506_341486d7-6c55-48af-8df1-6e07d9290bc7!
	W0926 22:36:14.382126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:36:14.388844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:36:16.392242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:36:16.396206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cebf1f1ed6be19b56dc23481a5410552eccab7653863a9a3e2d0911b4bdc8aa3] <==
	W0926 22:47:23.783894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:25.786974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:25.790621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:27.793484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:27.796981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:29.799540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:29.803398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:31.805800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:31.810125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:33.812916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:33.816545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:35.819170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:35.822496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:37.825861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:37.829859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:39.832242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:39.835734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:41.838827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:41.842736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:43.845800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:43.850198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:45.852640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:45.856234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:47.859160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:47.863035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
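Note: the kubelet log and pod events captured above fail with "429 Too Many Requests ... toomanyrequests" from registry-1.docker.io, i.e. Docker Hub's unauthenticated pull rate limit; the nginx, mysql, echo-server, dashboard and metrics-scraper pulls all stall on it. A minimal sketch of two mitigations for a run like this (the mirror URL, secret name and credential variables below are hypothetical placeholders; --registry-mirror is aimed at the Docker daemon and may not cover every containerd pull path):

	# point the cluster at a pull-through registry mirror when starting it
	out/minikube-linux-amd64 start -p functional-459506 --driver=docker --container-runtime=containerd --registry-mirror=https://mirror.example.com
	# or make pulls authenticated so they count against an account quota instead of the anonymous limit
	kubectl --context functional-459506 create secret docker-registry dockerhub-creds --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PASS"
	kubectl --context functional-459506 patch serviceaccount default -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'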
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-459506 -n functional-459506
helpers_test.go:269: (dbg) Run:  kubectl --context functional-459506 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-c4qtx hello-node-connect-7d85dfc575-g9scz mysql-5bb876957f-cv8kj nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-5xhv2 kubernetes-dashboard-855c9754f9-59n29
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-459506 describe pod busybox-mount hello-node-75c85bcc94-c4qtx hello-node-connect-7d85dfc575-g9scz mysql-5bb876957f-cv8kj nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-5xhv2 kubernetes-dashboard-855c9754f9-59n29
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-459506 describe pod busybox-mount hello-node-75c85bcc94-c4qtx hello-node-connect-7d85dfc575-g9scz mysql-5bb876957f-cv8kj nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-5xhv2 kubernetes-dashboard-855c9754f9-59n29: exit status 1 (91.96596ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-459506/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:42:30 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  containerd://ca62526b2c327497c75dc175ee6636f9d7c65b49b65c963619f5f8b5205b4a44
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 26 Sep 2025 22:42:33 +0000
	      Finished:     Fri, 26 Sep 2025 22:42:33 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ksn8n (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-ksn8n:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m19s  default-scheduler  Successfully assigned default/busybox-mount to functional-459506
	  Normal  Pulling    5m18s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m16s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.295s (2.295s including waiting). Image size: 2395207 bytes.
	  Normal  Created    5m16s  kubelet            Created container: mount-munger
	  Normal  Started    5m16s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-c4qtx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-459506/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:37:00 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p27jz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-p27jz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-c4qtx to functional-459506
	  Warning  Failed     9m21s (x3 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    8m1s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m58s (x2 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m58s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    43s (x41 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     43s (x41 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-g9scz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-459506/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:37:01 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zn97f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zn97f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-g9scz to functional-459506
	  Normal   Pulling    7m57s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m54s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m54s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    39s (x42 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     39s (x42 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-cv8kj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-459506/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:43:09 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-549ls (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-549ls:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m40s                 default-scheduler  Successfully assigned default/mysql-5bb876957f-cv8kj to functional-459506
	  Normal   Pulling    100s (x5 over 4m39s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     97s (x5 over 4m37s)   kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     97s (x5 over 4m37s)   kubelet            Error: ErrImagePull
	  Warning  Failed     30s (x15 over 4m37s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    1s (x17 over 4m37s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-459506/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:37:01 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rk7pr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rk7pr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/nginx-svc to functional-459506
	  Normal   Pulling    7m33s (x5 over 10m)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     7m30s (x5 over 10m)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m30s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    41s (x41 over 10m)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     41s (x41 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-459506/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:37:07 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zv4kq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-zv4kq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/sp-pod to functional-459506
	  Warning  Failed     10m                  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m46s (x5 over 10m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m43s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m43s (x4 over 10m)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    39s (x41 over 10m)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     39s (x41 over 10m)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-5xhv2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-59n29" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-459506 describe pod busybox-mount hello-node-75c85bcc94-c4qtx hello-node-connect-7d85dfc575-g9scz mysql-5bb876957f-cv8kj nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-5xhv2 kubernetes-dashboard-855c9754f9-59n29: exit status 1
E0926 22:51:57.765976   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/DashboardCmd (302.18s)
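
Note on the failures above: every pod in this run is stuck in ImagePullBackOff for the same reason, unauthenticated pulls against registry-1.docker.io answered with 429 Too Many Requests. A minimal sketch for checking how much of the anonymous quota remains, using Docker's documented ratelimitpreview probe image (curl and jq on the build agent are assumed, and this check is not part of the test output):

    # request an anonymous pull token for the rate-limit probe repository
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    # a HEAD request on the manifest reports the quota in ratelimit-* response headers
    curl -sI -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

Authenticating the pulls (docker login on the node, or an imagePullSecret) or pre-loading the test images with "minikube image load" would sidestep the anonymous limit; neither is done in this run.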

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (602.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-459506 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-459506 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-g9scz" [3352791e-ffd2-43f2-a616-6553c6db8a5f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E0926 22:37:02.897606   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-459506 -n functional-459506
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-26 22:47:02.264372849 +0000 UTC m=+1086.519579927
functional_test.go:1645: (dbg) Run:  kubectl --context functional-459506 describe po hello-node-connect-7d85dfc575-g9scz -n default
functional_test.go:1645: (dbg) kubectl --context functional-459506 describe po hello-node-connect-7d85dfc575-g9scz -n default:
Name:             hello-node-connect-7d85dfc575-g9scz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-459506/192.168.49.2
Start Time:       Fri, 26 Sep 2025 22:37:01 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zn97f (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-zn97f:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-g9scz to functional-459506
  Normal   Pulling    7m10s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m7s (x5 over 9m55s)    kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     7m7s (x5 over 9m55s)    kubelet            Error: ErrImagePull
  Warning  Failed     4m53s (x20 over 9m54s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m40s (x21 over 9m54s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-459506 logs hello-node-connect-7d85dfc575-g9scz -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-459506 logs hello-node-connect-7d85dfc575-g9scz -n default: exit status 1 (55.858035ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-g9scz" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-459506 logs hello-node-connect-7d85dfc575-g9scz -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-459506 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-g9scz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-459506/192.168.49.2
Start Time:       Fri, 26 Sep 2025 22:37:01 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zn97f (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-zn97f:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-g9scz to functional-459506
  Normal   Pulling    7m10s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m7s (x5 over 9m55s)    kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     7m7s (x5 over 9m55s)    kubelet            Error: ErrImagePull
  Warning  Failed     4m53s (x20 over 9m54s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m40s (x21 over 9m54s)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-459506 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-459506 logs -l app=hello-node-connect: exit status 1 (58.615582ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-g9scz" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-459506 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
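
The empty log dump above is expected: a container that never left the Waiting state has no log stream, which is why both "kubectl logs" attempts return BadRequest. In that situation the pod's events are the only diagnostic output; a sketch of pulling just those events (same context and pod name as above):

    kubectl --context functional-459506 get events \
      --field-selector involvedObject.name=hello-node-connect-7d85dfc575-g9scz \
      --sort-by=.lastTimestamp
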
functional_test.go:1624: (dbg) Run:  kubectl --context functional-459506 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.107.187.36
IPs:                      10.107.187.36
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32169/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
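
The Endpoints line above is the key detail: the service and its NodePort (32169) are configured correctly, but with no Ready pod behind the app=hello-node-connect selector there is nothing to forward traffic to, so the connect test can only time out. A quick way to confirm that chain, assuming the same kubectl context:

    kubectl --context functional-459506 get endpoints hello-node-connect
    kubectl --context functional-459506 get pods -l app=hello-node-connect \
      -o custom-columns=NAME:.metadata.name,READY:.status.containerStatuses[0].ready
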
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-459506
helpers_test.go:243: (dbg) docker inspect functional-459506:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917",
	        "Created": "2025-09-26T22:35:21.920836916Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 45420,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-26T22:35:21.951781694Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917/hostname",
	        "HostsPath": "/var/lib/docker/containers/d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917/hosts",
	        "LogPath": "/var/lib/docker/containers/d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917/d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917-json.log",
	        "Name": "/functional-459506",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-459506:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-459506",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917",
	                "LowerDir": "/var/lib/docker/overlay2/bd668024ca4cca7750265350f3fd8afee0721ce008e144a8f8a5b04847ef3880-init/diff:/var/lib/docker/overlay2/9d3f38ae04ffa0ee7bbacc3f831d8e286eafea1eb3c677a38c62c87997e117c6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd668024ca4cca7750265350f3fd8afee0721ce008e144a8f8a5b04847ef3880/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd668024ca4cca7750265350f3fd8afee0721ce008e144a8f8a5b04847ef3880/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd668024ca4cca7750265350f3fd8afee0721ce008e144a8f8a5b04847ef3880/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-459506",
	                "Source": "/var/lib/docker/volumes/functional-459506/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-459506",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-459506",
	                "name.minikube.sigs.k8s.io": "functional-459506",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb0f8342093a0b817dd54ab2bfc7283d5c3b97c478a905330b0fb0f03d232a34",
	            "SandboxKey": "/var/run/docker/netns/fb0f8342093a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-459506": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:64:7a:80:ed:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b1d72584285bd0f2762e93cd89eea0f410798a5f4c51ad294c42f4fa0b4247fe",
	                    "EndpointID": "d3c98e2363a4eab3bdc87cfbc565ff15bb3e69f484dbf18a36fe7e0d357135a4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-459506",
	                        "d095d86ee54b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
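
For this failure, most of the inspect dump matters only for the published ports: the kic container exposes the Kubernetes API server on 8441/tcp, mapped to 127.0.0.1:32786 on the host, which is the port the cluster's API server is reachable on from the host. A sketch of extracting just that mapping with docker's Go-template formatter (same container name as above):

    docker inspect functional-459506 \
      --format '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}'
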
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-459506 -n functional-459506
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-459506 logs -n 25: (1.313923962s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                                 ARGS                                                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-459506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3702317322/001:/mount1 --alsologtostderr -v=1                                                    │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ mount     │ -p functional-459506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3702317322/001:/mount2 --alsologtostderr -v=1                                                    │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ mount     │ -p functional-459506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3702317322/001:/mount3 --alsologtostderr -v=1                                                    │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ ssh       │ functional-459506 ssh findmnt -T /mount1                                                                                                                              │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ ssh       │ functional-459506 ssh findmnt -T /mount2                                                                                                                              │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ ssh       │ functional-459506 ssh findmnt -T /mount3                                                                                                                              │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ mount     │ -p functional-459506 --kill=true                                                                                                                                      │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ image     │ functional-459506 image load --daemon kicbase/echo-server:functional-459506 --alsologtostderr                                                                         │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image ls                                                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image load --daemon kicbase/echo-server:functional-459506 --alsologtostderr                                                                         │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image ls                                                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image load --daemon kicbase/echo-server:functional-459506 --alsologtostderr                                                                         │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image ls                                                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image save kicbase/echo-server:functional-459506 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image rm kicbase/echo-server:functional-459506 --alsologtostderr                                                                                    │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image ls                                                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr                                       │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image ls                                                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image save --daemon kicbase/echo-server:functional-459506 --alsologtostderr                                                                         │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ start     │ -p functional-459506 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                                                       │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ start     │ -p functional-459506 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                 │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ start     │ -p functional-459506 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                                                       │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-459506 --alsologtostderr -v=1                                                                                                        │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ service   │ functional-459506 service list                                                                                                                                        │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ service   │ functional-459506 service list -o json                                                                                                                                │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:42:46
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:42:46.896482   62711 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:42:46.896577   62711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:42:46.896585   62711 out.go:374] Setting ErrFile to fd 2...
	I0926 22:42:46.896589   62711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:42:46.896870   62711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
	I0926 22:42:46.897298   62711 out.go:368] Setting JSON to false
	I0926 22:42:46.898165   62711 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1502,"bootTime":1758925065,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:42:46.898235   62711 start.go:140] virtualization: kvm guest
	I0926 22:42:46.899971   62711 out.go:179] * [functional-459506] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:42:46.900984   62711 notify.go:220] Checking for updates...
	I0926 22:42:46.900989   62711 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:42:46.901968   62711 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:42:46.902910   62711 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-9508/kubeconfig
	I0926 22:42:46.904148   62711 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube
	I0926 22:42:46.905129   62711 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:42:46.906068   62711 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:42:46.907486   62711 config.go:182] Loaded profile config "functional-459506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0926 22:42:46.908008   62711 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:42:46.930088   62711 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:42:46.930160   62711 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:42:46.982478   62711 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-26 22:42:46.973277108 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:42:46.982575   62711 docker.go:318] overlay module found
	I0926 22:42:46.984004   62711 out.go:179] * Using the docker driver based on the existing profile
	I0926 22:42:46.985050   62711 start.go:304] selected driver: docker
	I0926 22:42:46.985070   62711 start.go:924] validating driver "docker" against &{Name:functional-459506 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-459506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:42:46.985173   62711 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:42:46.986851   62711 out.go:203] 
	W0926 22:42:46.987810   62711 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is below the usable minimum of 1800MB
	I0926 22:42:46.988796   62711 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ca62526b2c327       56cc512116c8f       4 minutes ago       Exited              mount-munger              0                   fecae25f41ca9       busybox-mount
	cebf1f1ed6be1       6e38f40d628db       10 minutes ago      Running             storage-provisioner       2                   0f104490635a6       storage-provisioner
	e5a30b0760041       90550c43ad2bc       10 minutes ago      Running             kube-apiserver            0                   78751bafaf7a4       kube-apiserver-functional-459506
	8e603c814a88f       a0af72f2ec6d6       10 minutes ago      Running             kube-controller-manager   2                   9f72adfa3efa4       kube-controller-manager-functional-459506
	16663cf3fd5d1       5f1f5298c888d       10 minutes ago      Running             etcd                      1                   91a9c6f7a15e2       etcd-functional-459506
	6989a06c1aa04       a0af72f2ec6d6       10 minutes ago      Exited              kube-controller-manager   1                   9f72adfa3efa4       kube-controller-manager-functional-459506
	c894d70efe2fc       46169d968e920       10 minutes ago      Running             kube-scheduler            1                   0f4b676619c64       kube-scheduler-functional-459506
	a264dd8f5b4a2       df0860106674d       10 minutes ago      Running             kube-proxy                1                   546d39f814afe       kube-proxy-2wtsn
	8bd6c0af7c48b       409467f978b4a       10 minutes ago      Running             kindnet-cni               1                   1eaf123c6da9f       kindnet-l54kz
	4a47257142396       52546a367cc9e       10 minutes ago      Running             coredns                   1                   475dc21959dca       coredns-66bc5c9577-4vrmt
	903a74e2d7853       6e38f40d628db       10 minutes ago      Exited              storage-provisioner       1                   0f104490635a6       storage-provisioner
	e40a4f9b16a60       52546a367cc9e       11 minutes ago      Exited              coredns                   0                   475dc21959dca       coredns-66bc5c9577-4vrmt
	6f0081db32335       409467f978b4a       11 minutes ago      Exited              kindnet-cni               0                   1eaf123c6da9f       kindnet-l54kz
	d99db3f0a539a       df0860106674d       11 minutes ago      Exited              kube-proxy                0                   546d39f814afe       kube-proxy-2wtsn
	bbe132d91cab0       46169d968e920       11 minutes ago      Exited              kube-scheduler            0                   0f4b676619c64       kube-scheduler-functional-459506
	15228ae0744fa       5f1f5298c888d       11 minutes ago      Exited              etcd                      0                   91a9c6f7a15e2       etcd-functional-459506
	
	
	==> containerd <==
	Sep 26 22:44:31 functional-459506 containerd[3896]: time="2025-09-26T22:44:31.227085503Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 26 22:44:31 functional-459506 containerd[3896]: time="2025-09-26T22:44:31.228173757Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:44:31 functional-459506 containerd[3896]: time="2025-09-26T22:44:31.813057252Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:44:33 functional-459506 containerd[3896]: time="2025-09-26T22:44:33.461652463Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:44:33 functional-459506 containerd[3896]: time="2025-09-26T22:44:33.461716457Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Sep 26 22:44:41 functional-459506 containerd[3896]: time="2025-09-26T22:44:41.947790126Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Sep 26 22:44:41 functional-459506 containerd[3896]: time="2025-09-26T22:44:41.949310135Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:44:42 functional-459506 containerd[3896]: time="2025-09-26T22:44:42.537639249Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:44:44 functional-459506 containerd[3896]: time="2025-09-26T22:44:44.181352998Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:44:44 functional-459506 containerd[3896]: time="2025-09-26T22:44:44.181390517Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
	Sep 26 22:45:53 functional-459506 containerd[3896]: time="2025-09-26T22:45:53.948401894Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 26 22:45:53 functional-459506 containerd[3896]: time="2025-09-26T22:45:53.950305773Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:45:54 functional-459506 containerd[3896]: time="2025-09-26T22:45:54.539937797Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:45:56 functional-459506 containerd[3896]: time="2025-09-26T22:45:56.553132808Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:45:56 functional-459506 containerd[3896]: time="2025-09-26T22:45:56.553214233Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=12711"
	Sep 26 22:46:00 functional-459506 containerd[3896]: time="2025-09-26T22:46:00.951158207Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 26 22:46:00 functional-459506 containerd[3896]: time="2025-09-26T22:46:00.952670005Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:46:01 functional-459506 containerd[3896]: time="2025-09-26T22:46:01.534842031Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:46:03 functional-459506 containerd[3896]: time="2025-09-26T22:46:03.187721892Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:46:03 functional-459506 containerd[3896]: time="2025-09-26T22:46:03.187779049Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Sep 26 22:46:09 functional-459506 containerd[3896]: time="2025-09-26T22:46:09.947399005Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Sep 26 22:46:09 functional-459506 containerd[3896]: time="2025-09-26T22:46:09.949044888Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:46:10 functional-459506 containerd[3896]: time="2025-09-26T22:46:10.540196740Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:46:12 functional-459506 containerd[3896]: time="2025-09-26T22:46:12.177146732Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:46:12 functional-459506 containerd[3896]: time="2025-09-26T22:46:12.177209329Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10966"
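
Two separate containerd issues are visible in the log above. The pull failures are the same Docker Hub 429 rate limiting already seen in the pod events, but each pull is also preceded by "failed to decode hosts.toml: invalid `host` tree", meaning the registry-hosts file handed to containerd is malformed and appears to be ignored rather than used. For reference only (mirror.gcr.io is just an example mirror, not what this cluster uses, and the path is the usual containerd default), a well-formed hosts.toml for docker.io could be written inside the node like this:

    # run inside the node, e.g. via "minikube -p functional-459506 ssh"
    sudo tee /etc/containerd/certs.d/docker.io/hosts.toml <<'EOF'
    server = "https://registry-1.docker.io"

    [host."https://mirror.gcr.io"]
      capabilities = ["pull", "resolve"]
    EOF
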
	
	
	==> coredns [4a47257142396d0a917fecabd4ae47f729eb1ab3570ffb7517ff9f5248fd93df] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48290 - 21129 "HINFO IN 8280138097893442510.5169536380750645255. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023990945s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e40a4f9b16a6001c5ae0925a33fdc6dedeeb89585171a66821936c02876500f5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51300 - 29788 "HINFO IN 4563362523290822031.8774679367264300029. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069789178s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
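	
	The connection-refused errors in the first coredns instance appear to coincide with the control-plane restart visible in the etcd, kube-controller-manager, and kube-scheduler logs below (around 22:36); CoreDNS recovers once the API server is reachable again. A quick way to re-check its state after the fact, assuming the kubectl context is named after the minikube profile:
	
	  kubectl --context functional-459506 -n kube-system get pods -l k8s-app=kube-dns -o wide
	  kubectl --context functional-459506 -n kube-system logs -l k8s-app=kube-dns --tail=20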
	
	
	==> describe nodes <==
	Name:               functional-459506
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-459506
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=functional-459506
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T22_35_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 22:35:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-459506
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 22:46:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 22:46:38 +0000   Fri, 26 Sep 2025 22:35:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 22:46:38 +0000   Fri, 26 Sep 2025 22:35:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 22:46:38 +0000   Fri, 26 Sep 2025 22:35:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 22:46:38 +0000   Fri, 26 Sep 2025 22:35:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-459506
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 05e4574455ab4b559c781aee570b04b3
	  System UUID:                d46c27bc-3376-49b5-80bd-4cdd4f761af8
	  Boot ID:                    d6777c8b-c717-4851-a50e-a884fc659348
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-c4qtx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-g9scz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-cv8kj                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     3m54s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 coredns-66bc5c9577-4vrmt                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-459506                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-l54kz                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-459506              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-459506     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-2wtsn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-459506              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-5xhv2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-59n29         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-459506 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-459506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-459506 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-459506 event: Registered Node functional-459506 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-459506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-459506 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-459506 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-459506 event: Registered Node functional-459506 in Controller
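	
	The node summary above is plain `kubectl describe node` output; re-running it alongside a pod listing (a sketch, again assuming the kubectl context matches the profile name) shows which of the non-terminated pods listed above are actually stuck on the image pulls logged earlier:
	
	  kubectl --context functional-459506 describe node functional-459506
	  kubectl --context functional-459506 get pods -A -o wide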
	
	
	==> dmesg <==
	[Sep26 22:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001877] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.086010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.387443] i8042: Warning: Keylock active
	[  +0.011484] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004689] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000998] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.001003] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000986] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.001141] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000947] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.001004] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001049] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001043] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.448971] block sda: the capability attribute has been deprecated.
	[  +0.076726] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021403] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.907524] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [15228ae0744fa3d8d71e9ed9acb7601ebe23cd47d92475f3358c2b085a409570] <==
	{"level":"warn","ts":"2025-09-26T22:35:32.476089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.482777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.488658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.494335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.509868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.515650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.567513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36896","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:36:32.055542Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-26T22:36:32.055623Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-459506","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-26T22:36:32.055736Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:36:32.057359Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:36:32.057441Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:36:32.057539Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-26T22:36:32.058027Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-26T22:36:32.058019Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-26T22:36:32.057993Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:36:32.058205Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:36:32.058215Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:36:32.058223Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-26T22:36:32.058230Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-26T22:36:32.058236Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:36:32.059913Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-26T22:36:32.059967Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:36:32.060006Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-26T22:36:32.060044Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-459506","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [16663cf3fd5d10b83679013fbc8fc1c36cf64834b3eae54f2ef5c88da055361c] <==
	{"level":"warn","ts":"2025-09-26T22:36:36.402507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.408943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.415035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.422084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.429205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.435131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.441633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.448848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.454809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.462310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.475036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.480895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.487563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.493602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.499669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.505495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.512230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.519707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.529837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.536990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.543375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.594597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59580","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:46:36.125492Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1071}
	{"level":"info","ts":"2025-09-26T22:46:36.144507Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1071,"took":"18.66727ms","hash":1726823260,"current-db-size-bytes":3829760,"current-db-size":"3.8 MB","current-db-size-in-use-bytes":1908736,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-09-26T22:46:36.144558Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1726823260,"revision":1071,"compact-revision":-1}
	
	
	==> kernel <==
	 22:47:03 up 29 min,  0 users,  load average: 0.13, 0.25, 0.40
	Linux functional-459506 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [6f0081db3233525107e5885f7a265bdd7fc9f0e70cd992771d9aaa4ca5682337] <==
	I0926 22:35:41.942119       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0926 22:35:41.942327       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0926 22:35:41.942436       1 main.go:148] setting mtu 1500 for CNI 
	I0926 22:35:41.942455       1 main.go:178] kindnetd IP family: "ipv4"
	I0926 22:35:41.942472       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-26T22:35:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0926 22:35:42.141703       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0926 22:35:42.141777       1 controller.go:381] "Waiting for informer caches to sync"
	I0926 22:35:42.141794       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0926 22:35:42.142354       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0926 22:35:42.541865       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0926 22:35:42.541885       1 metrics.go:72] Registering metrics
	I0926 22:35:42.541946       1 controller.go:711] "Syncing nftables rules"
	I0926 22:35:52.143620       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:35:52.143696       1 main.go:301] handling current node
	I0926 22:36:02.146896       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:36:02.146939       1 main.go:301] handling current node
	I0926 22:36:12.150828       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:36:12.150867       1 main.go:301] handling current node
	
	
	==> kindnet [8bd6c0af7c48b340de1bf3a68946c513cc533581ddd4d6b0e4bf351239517410] <==
	I0926 22:45:02.892370       1 main.go:301] handling current node
	I0926 22:45:12.892088       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:45:12.892125       1 main.go:301] handling current node
	I0926 22:45:22.891384       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:45:22.891416       1 main.go:301] handling current node
	I0926 22:45:32.891564       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:45:32.891604       1 main.go:301] handling current node
	I0926 22:45:42.891309       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:45:42.891353       1 main.go:301] handling current node
	I0926 22:45:52.891630       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:45:52.891665       1 main.go:301] handling current node
	I0926 22:46:02.897806       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:46:02.897837       1 main.go:301] handling current node
	I0926 22:46:12.891354       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:46:12.891389       1 main.go:301] handling current node
	I0926 22:46:22.891694       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:46:22.891726       1 main.go:301] handling current node
	I0926 22:46:32.890693       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:46:32.890741       1 main.go:301] handling current node
	I0926 22:46:42.891174       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:46:42.891223       1 main.go:301] handling current node
	I0926 22:46:52.895095       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:46:52.895126       1 main.go:301] handling current node
	I0926 22:47:02.898848       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:47:02.898884       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e5a30b07600415b080587a2a6d1ea08b2055828357a99617f952c06563d727e2] <==
	I0926 22:36:55.972979       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.201.64"}
	I0926 22:37:00.548826       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.39.241"}
	I0926 22:37:01.255927       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.193.82"}
	I0926 22:37:01.977342       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.187.36"}
	I0926 22:37:36.883256       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:37:42.252049       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:38:40.206792       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:39:01.111456       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:39:58.542426       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:40:18.473793       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:41:05.376069       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:41:39.665227       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:42:31.940639       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:42:47.816380       1 controller.go:667] quota admission added evaluator for: namespaces
	I0926 22:42:47.904783       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.131.25"}
	I0926 22:42:47.920679       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.221.240"}
	I0926 22:42:56.161937       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:43:09.909055       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.107.107.136"}
	I0926 22:43:44.613318       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:44:04.440996       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:44:46.386965       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:45:15.977328       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:46:10.613817       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:46:36.975011       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0926 22:46:44.738140       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [6989a06c1aa044081666ea274870f6b2f62081f15fddafd098ceec849ef63965] <==
	I0926 22:36:23.268600       1 serving.go:386] Generated self-signed cert in-memory
	I0926 22:36:23.628583       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0926 22:36:23.628607       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:36:23.630025       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0926 22:36:23.630038       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0926 22:36:23.630385       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0926 22:36:23.630414       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0926 22:36:33.632748       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [8e603c814a88fbfef59bb33f84ea361bd131e385ab2a4d76cc74bde2bcfaea0d] <==
	I0926 22:36:40.364466       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0926 22:36:40.364501       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0926 22:36:40.364514       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0926 22:36:40.364568       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0926 22:36:40.364622       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0926 22:36:40.364632       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0926 22:36:40.364686       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0926 22:36:40.364719       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0926 22:36:40.364730       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0926 22:36:40.364834       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-459506"
	I0926 22:36:40.364892       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0926 22:36:40.366981       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0926 22:36:40.370586       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:36:40.370617       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:36:40.370637       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0926 22:36:40.372783       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0926 22:36:40.375018       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0926 22:36:40.377335       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0926 22:36:40.385601       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0926 22:42:47.862065       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:42:47.865843       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:42:47.866048       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:42:47.868815       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:42:47.870826       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:42:47.874609       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [a264dd8f5b4a2942f0efee0b51ce7ed0adb4b1ad43db0f5b5f0c22c0ba88de78] <==
	I0926 22:36:22.558358       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0926 22:36:22.559394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-459506&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:23.538404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-459506&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:26.410869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-459506&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:31.532055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-459506&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0926 22:36:38.959447       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:36:38.959487       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:36:38.959582       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:36:38.986993       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:36:38.987069       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:36:38.994049       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:36:38.994605       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:36:38.994630       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:36:38.997310       1 config.go:200] "Starting service config controller"
	I0926 22:36:38.997330       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:36:38.997362       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:36:38.997368       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:36:38.997424       1 config.go:309] "Starting node config controller"
	I0926 22:36:38.997430       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:36:38.997436       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:36:38.997657       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:36:38.997669       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:36:39.097677       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 22:36:39.097747       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:36:39.098062       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [d99db3f0a539a19d9cf4e02c8429489ff255a6c5d2fe9f2573700d0ce0397f8f] <==
	I0926 22:35:41.509205       1 server_linux.go:53] "Using iptables proxy"
	I0926 22:35:41.575220       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:35:41.675605       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:35:41.675637       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:35:41.675771       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:35:41.699353       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:35:41.699490       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:35:41.705720       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:35:41.706093       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:35:41.706127       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:35:41.707545       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:35:41.707554       1 config.go:200] "Starting service config controller"
	I0926 22:35:41.707573       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:35:41.707594       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:35:41.707612       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:35:41.707575       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:35:41.707672       1 config.go:309] "Starting node config controller"
	I0926 22:35:41.707679       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:35:41.707684       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:35:41.807791       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:35:41.807805       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0926 22:35:41.807837       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [bbe132d91cab00583cfbee8fc0b2b826f5d89380f0d1522dccdf84bc4002a864] <==
	E0926 22:35:32.972891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:35:32.972938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:35:32.972966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:35:32.972988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:35:32.973074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:35:32.973076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:35:32.973105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:35:32.973193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:35:32.973192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:35:32.973179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:35:33.793455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:35:33.799444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 22:35:33.877548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:35:33.893413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 22:35:33.999974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:35:34.069240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:35:34.105348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:35:34.130498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:35:34.140448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I0926 22:35:34.470155       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:36:21.883098       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:36:21.883123       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0926 22:36:21.883227       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0926 22:36:21.883331       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0926 22:36:21.883366       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c894d70efe2fc6d275b679dc3901194c6f6800fe43d0055daf8fb4de89bdf15a] <==
	E0926 22:36:28.212606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:36:28.310457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0926 22:36:28.412275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:28.443003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:36:28.534103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 22:36:31.138080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:36:31.354330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:36:31.367786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:36:31.510528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 22:36:31.521081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 22:36:31.837947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:36:32.252990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:36:32.286651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 22:36:32.320204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:36:32.616030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:36:32.939676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:36:33.405067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:36:33.435786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:36:33.459236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:36:33.593227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:36:33.755685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:36:34.225507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:34.380598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0926 22:36:34.435490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I0926 22:36:46.721125       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 26 22:46:17 functional-459506 kubelet[4881]: E0926 22:46:17.948169    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-59n29" podUID="ff1b0900-53c3-461c-b185-87f7165859ca"
	Sep 26 22:46:19 functional-459506 kubelet[4881]: E0926 22:46:19.947273    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-g9scz" podUID="3352791e-ffd2-43f2-a616-6553c6db8a5f"
	Sep 26 22:46:19 functional-459506 kubelet[4881]: E0926 22:46:19.948046    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="f0d2d088-a017-4e6f-8a58-bf2e6db70c49"
	Sep 26 22:46:21 functional-459506 kubelet[4881]: E0926 22:46:21.947501    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="b5494cea-410c-40a9-85da-5cc71c798527"
	Sep 26 22:46:25 functional-459506 kubelet[4881]: E0926 22:46:25.946471    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-c4qtx" podUID="3d1f055e-4361-4aa1-83f9-7dc31c06573a"
	Sep 26 22:46:25 functional-459506 kubelet[4881]: E0926 22:46:25.947244    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5xhv2" podUID="f48c8cd4-f309-4e69-a0b4-7c297b8f118d"
	Sep 26 22:46:26 functional-459506 kubelet[4881]: E0926 22:46:26.947783    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-cv8kj" podUID="0463eed8-e7cc-4a57-a2a5-94ce2843b138"
	Sep 26 22:46:30 functional-459506 kubelet[4881]: E0926 22:46:30.947384    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-g9scz" podUID="3352791e-ffd2-43f2-a616-6553c6db8a5f"
	Sep 26 22:46:30 functional-459506 kubelet[4881]: E0926 22:46:30.948064    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="f0d2d088-a017-4e6f-8a58-bf2e6db70c49"
	Sep 26 22:46:32 functional-459506 kubelet[4881]: E0926 22:46:32.947051    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="b5494cea-410c-40a9-85da-5cc71c798527"
	Sep 26 22:46:32 functional-459506 kubelet[4881]: E0926 22:46:32.947635    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-59n29" podUID="ff1b0900-53c3-461c-b185-87f7165859ca"
	Sep 26 22:46:37 functional-459506 kubelet[4881]: E0926 22:46:37.947555    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5xhv2" podUID="f48c8cd4-f309-4e69-a0b4-7c297b8f118d"
	Sep 26 22:46:39 functional-459506 kubelet[4881]: E0926 22:46:39.947834    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-c4qtx" podUID="3d1f055e-4361-4aa1-83f9-7dc31c06573a"
	Sep 26 22:46:39 functional-459506 kubelet[4881]: E0926 22:46:39.948400    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-cv8kj" podUID="0463eed8-e7cc-4a57-a2a5-94ce2843b138"
	Sep 26 22:46:41 functional-459506 kubelet[4881]: E0926 22:46:41.947216    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-g9scz" podUID="3352791e-ffd2-43f2-a616-6553c6db8a5f"
	Sep 26 22:46:45 functional-459506 kubelet[4881]: E0926 22:46:45.947926    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="f0d2d088-a017-4e6f-8a58-bf2e6db70c49"
	Sep 26 22:46:46 functional-459506 kubelet[4881]: E0926 22:46:46.946864    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="b5494cea-410c-40a9-85da-5cc71c798527"
	Sep 26 22:46:46 functional-459506 kubelet[4881]: E0926 22:46:46.947463    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-59n29" podUID="ff1b0900-53c3-461c-b185-87f7165859ca"
	Sep 26 22:46:51 functional-459506 kubelet[4881]: E0926 22:46:51.947304    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-c4qtx" podUID="3d1f055e-4361-4aa1-83f9-7dc31c06573a"
	Sep 26 22:46:52 functional-459506 kubelet[4881]: E0926 22:46:52.947719    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5xhv2" podUID="f48c8cd4-f309-4e69-a0b4-7c297b8f118d"
	Sep 26 22:46:54 functional-459506 kubelet[4881]: E0926 22:46:54.948548    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-cv8kj" podUID="0463eed8-e7cc-4a57-a2a5-94ce2843b138"
	Sep 26 22:46:55 functional-459506 kubelet[4881]: E0926 22:46:55.947492    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-g9scz" podUID="3352791e-ffd2-43f2-a616-6553c6db8a5f"
	Sep 26 22:46:56 functional-459506 kubelet[4881]: E0926 22:46:56.947249    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="f0d2d088-a017-4e6f-8a58-bf2e6db70c49"
	Sep 26 22:46:58 functional-459506 kubelet[4881]: E0926 22:46:58.946939    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="b5494cea-410c-40a9-85da-5cc71c798527"
	Sep 26 22:47:00 functional-459506 kubelet[4881]: E0926 22:47:00.950242    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-59n29" podUID="ff1b0900-53c3-461c-b185-87f7165859ca"
	
	
	==> storage-provisioner [903a74e2d785332eef5dd63e71cab7027811128118514bd84afbc9721ac5c416] <==
	I0926 22:36:12.358555       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0926 22:36:12.365148       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0926 22:36:12.365186       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0926 22:36:12.367487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:36:12.373103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0926 22:36:12.373284       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0926 22:36:12.373437       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-459506_341486d7-6c55-48af-8df1-6e07d9290bc7!
	I0926 22:36:12.373420       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dcaf19cb-0770-4ca7-b54d-720d909e89f2", APIVersion:"v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-459506_341486d7-6c55-48af-8df1-6e07d9290bc7 became leader
	W0926 22:36:12.375205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:36:12.377966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0926 22:36:12.474582       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-459506_341486d7-6c55-48af-8df1-6e07d9290bc7!
	W0926 22:36:14.382126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:36:14.388844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:36:16.392242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:36:16.396206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cebf1f1ed6be19b56dc23481a5410552eccab7653863a9a3e2d0911b4bdc8aa3] <==
	W0926 22:46:39.636858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:41.640068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:41.645196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:43.648555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:43.652142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:45.654690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:45.658204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:47.660900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:47.664248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:49.666602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:49.670107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:51.672440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:51.676687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:53.679481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:53.683802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:55.686276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:55.689720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:57.692328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:57.696791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:59.699180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:59.702986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:01.706301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:01.709880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:03.713271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:47:03.716801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-459506 -n functional-459506
helpers_test.go:269: (dbg) Run:  kubectl --context functional-459506 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-c4qtx hello-node-connect-7d85dfc575-g9scz mysql-5bb876957f-cv8kj nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-5xhv2 kubernetes-dashboard-855c9754f9-59n29
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-459506 describe pod busybox-mount hello-node-75c85bcc94-c4qtx hello-node-connect-7d85dfc575-g9scz mysql-5bb876957f-cv8kj nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-5xhv2 kubernetes-dashboard-855c9754f9-59n29
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-459506 describe pod busybox-mount hello-node-75c85bcc94-c4qtx hello-node-connect-7d85dfc575-g9scz mysql-5bb876957f-cv8kj nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-5xhv2 kubernetes-dashboard-855c9754f9-59n29: exit status 1 (98.74311ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-459506/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:42:30 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  containerd://ca62526b2c327497c75dc175ee6636f9d7c65b49b65c963619f5f8b5205b4a44
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 26 Sep 2025 22:42:33 +0000
	      Finished:     Fri, 26 Sep 2025 22:42:33 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ksn8n (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-ksn8n:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  4m34s  default-scheduler  Successfully assigned default/busybox-mount to functional-459506
	  Normal  Pulling    4m33s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m31s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.295s (2.295s including waiting). Image size: 2395207 bytes.
	  Normal  Created    4m31s  kubelet            Created container: mount-munger
	  Normal  Started    4m31s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-c4qtx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-459506/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:37:00 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p27jz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-p27jz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-75c85bcc94-c4qtx to functional-459506
	  Warning  Failed     8m36s (x3 over 9m46s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m16s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m13s (x2 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m13s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m51s (x20 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m36s (x21 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-g9scz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-459506/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:37:01 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zn97f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zn97f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-g9scz to functional-459506
	  Normal   Pulling    7m12s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m9s (x5 over 9m57s)    kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m9s (x5 over 9m57s)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m55s (x20 over 9m56s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m42s (x21 over 9m56s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-cv8kj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-459506/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:43:09 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-549ls (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-549ls:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  3m55s                 default-scheduler  Successfully assigned default/mysql-5bb876957f-cv8kj to functional-459506
	  Normal   Pulling    55s (x5 over 3m54s)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     52s (x5 over 3m52s)   kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     52s (x5 over 3m52s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    10s (x13 over 3m52s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     10s (x13 over 3m52s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-459506/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:37:01 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rk7pr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rk7pr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/nginx-svc to functional-459506
	  Normal   Pulling    6m48s (x5 over 10m)     kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     6m45s (x5 over 9m59s)   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m45s (x5 over 9m59s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m49s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m37s (x21 over 9m58s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-459506/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:37:07 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zv4kq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-zv4kq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m57s                   default-scheduler  Successfully assigned default/sp-pod to functional-459506
	  Warning  Failed     9m54s                   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m1s (x5 over 9m57s)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     6m58s (x5 over 9m54s)   kubelet            Error: ErrImagePull
	  Warning  Failed     6m58s (x4 over 9m37s)   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m49s (x19 over 9m53s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m23s (x21 over 9m53s)  kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-5xhv2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-59n29" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-459506 describe pod busybox-mount hello-node-75c85bcc94-c4qtx hello-node-connect-7d85dfc575-g9scz mysql-5bb876957f-cv8kj nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-5xhv2 kubernetes-dashboard-855c9754f9-59n29: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.80s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (368.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [2d348b21-e3b4-40bd-b5e7-01094db2de5d] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00270989s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-459506 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-459506 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-459506 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-459506 apply -f testdata/storage-provisioner/pod.yaml
I0926 22:37:07.335492   13040 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [b5494cea-410c-40a9-85da-5cc71c798527] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0926 22:37:08.019257   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:37:18.260887   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:37:38.742820   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:38:19.704519   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:39:41.625956   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-459506 -n functional-459506
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-09-26 22:43:07.62205912 +0000 UTC m=+851.877266195
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-459506 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-459506 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-459506/192.168.49.2
Start Time:       Fri, 26 Sep 2025 22:37:07 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:  10.244.0.7
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zv4kq (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-zv4kq:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  6m                    default-scheduler  Successfully assigned default/sp-pod to functional-459506
Warning  Failed     5m57s                 kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    3m4s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx"
Warning  Failed     3m1s (x5 over 5m57s)  kubelet            Error: ErrImagePull
Warning  Failed     3m1s (x4 over 5m40s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     52s (x19 over 5m56s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    26s (x21 over 5m56s)  kubelet            Back-off pulling image "docker.io/nginx"
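The events above point at the root cause: every pull of docker.io/nginx is rejected with 429 Too Many Requests because the node pulls unauthenticated, so the pod never leaves ImagePullBackOff. As a hedged mitigation sketch (not part of the test itself), the image could be pulled once on the host, where `docker login` can raise the rate limit, and then side-loaded into the node with the same `image load` subcommand this run already exercises elsewhere; the profile name is the one used in this report:

	# pull once on the host (optionally after `docker login` to lift the Docker Hub limit)
	docker pull docker.io/library/nginx:latest
	# side-load the image into the minikube node so the kubelet never contacts Docker Hub
	out/minikube-linux-amd64 -p functional-459506 image load docker.io/library/nginx:latest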
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-459506 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-459506 logs sp-pod -n default: exit status 1 (55.210819ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-459506 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
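The 6m0s wait that times out here is keyed on the test=storage-provisioner label carried by sp-pod. An approximate manual equivalent of that readiness check (an illustration, not the test's actual Go code) would be:

	kubectl --context functional-459506 -n default wait pod \
	  -l test=storage-provisioner --for=condition=Ready --timeout=6m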
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-459506
helpers_test.go:243: (dbg) docker inspect functional-459506:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917",
	        "Created": "2025-09-26T22:35:21.920836916Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 45420,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-26T22:35:21.951781694Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917/hostname",
	        "HostsPath": "/var/lib/docker/containers/d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917/hosts",
	        "LogPath": "/var/lib/docker/containers/d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917/d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917-json.log",
	        "Name": "/functional-459506",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-459506:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-459506",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917",
	                "LowerDir": "/var/lib/docker/overlay2/bd668024ca4cca7750265350f3fd8afee0721ce008e144a8f8a5b04847ef3880-init/diff:/var/lib/docker/overlay2/9d3f38ae04ffa0ee7bbacc3f831d8e286eafea1eb3c677a38c62c87997e117c6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd668024ca4cca7750265350f3fd8afee0721ce008e144a8f8a5b04847ef3880/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd668024ca4cca7750265350f3fd8afee0721ce008e144a8f8a5b04847ef3880/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd668024ca4cca7750265350f3fd8afee0721ce008e144a8f8a5b04847ef3880/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-459506",
	                "Source": "/var/lib/docker/volumes/functional-459506/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-459506",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-459506",
	                "name.minikube.sigs.k8s.io": "functional-459506",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb0f8342093a0b817dd54ab2bfc7283d5c3b97c478a905330b0fb0f03d232a34",
	            "SandboxKey": "/var/run/docker/netns/fb0f8342093a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-459506": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:64:7a:80:ed:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b1d72584285bd0f2762e93cd89eea0f410798a5f4c51ad294c42f4fa0b4247fe",
	                    "EndpointID": "d3c98e2363a4eab3bdc87cfbc565ff15bb3e69f484dbf18a36fe7e0d357135a4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-459506",
	                        "d095d86ee54b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
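Rather than scanning the full inspect blob, individual fields can be pulled out with `docker inspect --format` Go templates; the examples below are illustrative only (the map keys are indexed because the profile name contains a hyphen), and the expected values match the output above:

	# host port mapped to the apiserver port 8441 inside the container (32786 above)
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-459506
	# node IP on the cluster network (192.168.49.2 above)
	docker inspect -f '{{ (index .NetworkSettings.Networks "functional-459506").IPAddress }}' functional-459506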
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-459506 -n functional-459506
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-459506 logs -n 25: (1.310668936s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                                 ARGS                                                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-459506 ssh sudo umount -f /mount-9p                                                                                                                        │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ ssh       │ functional-459506 ssh findmnt -T /mount1                                                                                                                              │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ mount     │ -p functional-459506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3702317322/001:/mount1 --alsologtostderr -v=1                                                    │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ mount     │ -p functional-459506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3702317322/001:/mount2 --alsologtostderr -v=1                                                    │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ mount     │ -p functional-459506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3702317322/001:/mount3 --alsologtostderr -v=1                                                    │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ ssh       │ functional-459506 ssh findmnt -T /mount1                                                                                                                              │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ ssh       │ functional-459506 ssh findmnt -T /mount2                                                                                                                              │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ ssh       │ functional-459506 ssh findmnt -T /mount3                                                                                                                              │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ mount     │ -p functional-459506 --kill=true                                                                                                                                      │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ image     │ functional-459506 image load --daemon kicbase/echo-server:functional-459506 --alsologtostderr                                                                         │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image ls                                                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image load --daemon kicbase/echo-server:functional-459506 --alsologtostderr                                                                         │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image ls                                                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image load --daemon kicbase/echo-server:functional-459506 --alsologtostderr                                                                         │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image ls                                                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image save kicbase/echo-server:functional-459506 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image rm kicbase/echo-server:functional-459506 --alsologtostderr                                                                                    │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image ls                                                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr                                       │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image ls                                                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image     │ functional-459506 image save --daemon kicbase/echo-server:functional-459506 --alsologtostderr                                                                         │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ start     │ -p functional-459506 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                                                       │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ start     │ -p functional-459506 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                 │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ start     │ -p functional-459506 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                                                       │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-459506 --alsologtostderr -v=1                                                                                                        │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:42:46
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:42:46.896482   62711 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:42:46.896577   62711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:42:46.896585   62711 out.go:374] Setting ErrFile to fd 2...
	I0926 22:42:46.896589   62711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:42:46.896870   62711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
	I0926 22:42:46.897298   62711 out.go:368] Setting JSON to false
	I0926 22:42:46.898165   62711 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1502,"bootTime":1758925065,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:42:46.898235   62711 start.go:140] virtualization: kvm guest
	I0926 22:42:46.899971   62711 out.go:179] * [functional-459506] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:42:46.900984   62711 notify.go:220] Checking for updates...
	I0926 22:42:46.900989   62711 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:42:46.901968   62711 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:42:46.902910   62711 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-9508/kubeconfig
	I0926 22:42:46.904148   62711 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube
	I0926 22:42:46.905129   62711 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:42:46.906068   62711 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:42:46.907486   62711 config.go:182] Loaded profile config "functional-459506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0926 22:42:46.908008   62711 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:42:46.930088   62711 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:42:46.930160   62711 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:42:46.982478   62711 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-26 22:42:46.973277108 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:42:46.982575   62711 docker.go:318] overlay module found
	I0926 22:42:46.984004   62711 out.go:179] * Using the docker driver based on the existing profile
	I0926 22:42:46.985050   62711 start.go:304] selected driver: docker
	I0926 22:42:46.985070   62711 start.go:924] validating driver "docker" against &{Name:functional-459506 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-459506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:42:46.985173   62711 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:42:46.986851   62711 out.go:203] 
	W0926 22:42:46.987810   62711 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0926 22:42:46.988796   62711 out.go:203] 
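The RSRC_INSUFFICIENT_REQ_MEMORY exit above is the expected outcome of this invocation: it requests 250MB, below the 1800MB minimum cited in the message, which the parallel dry-run subtest appears to request on purpose. Purely as an illustration, a dry-run that clears the memory validation would need a value at or above that floor:

	out/minikube-linux-amd64 start -p functional-459506 --dry-run --memory 2048mb \
	  --alsologtostderr --driver=docker --container-runtime=containerd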
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ca62526b2c327       56cc512116c8f       35 seconds ago      Exited              mount-munger              0                   fecae25f41ca9       busybox-mount
	cebf1f1ed6be1       6e38f40d628db       6 minutes ago       Running             storage-provisioner       2                   0f104490635a6       storage-provisioner
	e5a30b0760041       90550c43ad2bc       6 minutes ago       Running             kube-apiserver            0                   78751bafaf7a4       kube-apiserver-functional-459506
	8e603c814a88f       a0af72f2ec6d6       6 minutes ago       Running             kube-controller-manager   2                   9f72adfa3efa4       kube-controller-manager-functional-459506
	16663cf3fd5d1       5f1f5298c888d       6 minutes ago       Running             etcd                      1                   91a9c6f7a15e2       etcd-functional-459506
	6989a06c1aa04       a0af72f2ec6d6       6 minutes ago       Exited              kube-controller-manager   1                   9f72adfa3efa4       kube-controller-manager-functional-459506
	c894d70efe2fc       46169d968e920       6 minutes ago       Running             kube-scheduler            1                   0f4b676619c64       kube-scheduler-functional-459506
	a264dd8f5b4a2       df0860106674d       6 minutes ago       Running             kube-proxy                1                   546d39f814afe       kube-proxy-2wtsn
	8bd6c0af7c48b       409467f978b4a       6 minutes ago       Running             kindnet-cni               1                   1eaf123c6da9f       kindnet-l54kz
	4a47257142396       52546a367cc9e       6 minutes ago       Running             coredns                   1                   475dc21959dca       coredns-66bc5c9577-4vrmt
	903a74e2d7853       6e38f40d628db       6 minutes ago       Exited              storage-provisioner       1                   0f104490635a6       storage-provisioner
	e40a4f9b16a60       52546a367cc9e       7 minutes ago       Exited              coredns                   0                   475dc21959dca       coredns-66bc5c9577-4vrmt
	6f0081db32335       409467f978b4a       7 minutes ago       Exited              kindnet-cni               0                   1eaf123c6da9f       kindnet-l54kz
	d99db3f0a539a       df0860106674d       7 minutes ago       Exited              kube-proxy                0                   546d39f814afe       kube-proxy-2wtsn
	bbe132d91cab0       46169d968e920       7 minutes ago       Exited              kube-scheduler            0                   0f4b676619c64       kube-scheduler-functional-459506
	15228ae0744fa       5f1f5298c888d       7 minutes ago       Exited              etcd                      0                   91a9c6f7a15e2       etcd-functional-459506
	
	
	==> containerd <==
	Sep 26 22:42:48 functional-459506 containerd[3896]: time="2025-09-26T22:42:48.312624210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:dashboard-metrics-scraper-77bf4d6c4c-5xhv2,Uid:f48c8cd4-f309-4e69-a0b4-7c297b8f118d,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"4251ce061e261140b67f10547a969b2535f709ba73e52a5be9bab90e3703fa3d\""
	Sep 26 22:42:48 functional-459506 containerd[3896]: time="2025-09-26T22:42:48.314935285Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 26 22:42:48 functional-459506 containerd[3896]: time="2025-09-26T22:42:48.316438266Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:42:48 functional-459506 containerd[3896]: time="2025-09-26T22:42:48.322094095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-855c9754f9-59n29,Uid:ff1b0900-53c3-461c-b185-87f7165859ca,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"dfe6cb1af66cf702f4caa57ceca539e6ed327391a0be101038255ae202daa2a2\""
	Sep 26 22:42:48 functional-459506 containerd[3896]: time="2025-09-26T22:42:48.897084170Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:42:50 functional-459506 containerd[3896]: time="2025-09-26T22:42:50.551614996Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:42:50 functional-459506 containerd[3896]: time="2025-09-26T22:42:50.551666605Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Sep 26 22:42:50 functional-459506 containerd[3896]: time="2025-09-26T22:42:50.552443213Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 26 22:42:50 functional-459506 containerd[3896]: time="2025-09-26T22:42:50.553651286Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:42:51 functional-459506 containerd[3896]: time="2025-09-26T22:42:51.145819484Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:42:52 functional-459506 containerd[3896]: time="2025-09-26T22:42:52.789273677Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:42:52 functional-459506 containerd[3896]: time="2025-09-26T22:42:52.789335959Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Sep 26 22:42:53 functional-459506 containerd[3896]: time="2025-09-26T22:42:53.946747900Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Sep 26 22:42:53 functional-459506 containerd[3896]: time="2025-09-26T22:42:53.948234240Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:42:54 functional-459506 containerd[3896]: time="2025-09-26T22:42:54.527409436Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:42:57 functional-459506 containerd[3896]: time="2025-09-26T22:42:57.005886001Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:42:57 functional-459506 containerd[3896]: time="2025-09-26T22:42:57.005928557Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=21214"
	Sep 26 22:43:04 functional-459506 containerd[3896]: time="2025-09-26T22:43:04.948681309Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 26 22:43:04 functional-459506 containerd[3896]: time="2025-09-26T22:43:04.950329775Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:43:05 functional-459506 containerd[3896]: time="2025-09-26T22:43:05.539001703Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:43:07 functional-459506 containerd[3896]: time="2025-09-26T22:43:07.184277852Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:43:07 functional-459506 containerd[3896]: time="2025-09-26T22:43:07.184351729Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Sep 26 22:43:07 functional-459506 containerd[3896]: time="2025-09-26T22:43:07.184969735Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 26 22:43:07 functional-459506 containerd[3896]: time="2025-09-26T22:43:07.186359328Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:43:07 functional-459506 containerd[3896]: time="2025-09-26T22:43:07.780589077Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
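Each pull attempt above is preceded by a "failed to decode hosts.toml: invalid `host` tree" error from containerd's registry host configuration; it appears non-fatal here (the pulls still reach registry-1.docker.io) and is separate from the 429 rate-limit failures. For reference only, a minimal hosts.toml of the shape containerd expects under /etc/containerd/certs.d/ is sketched below; the mirror URL is a placeholder and this is not the file minikube actually writes:

	sudo mkdir -p /etc/containerd/certs.d/docker.io
	sudo tee /etc/containerd/certs.d/docker.io/hosts.toml <<'EOF'
	server = "https://registry-1.docker.io"

	[host."https://mirror.example.com"]
	  capabilities = ["pull", "resolve"]
	EOF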
	
	
	==> coredns [4a47257142396d0a917fecabd4ae47f729eb1ab3570ffb7517ff9f5248fd93df] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48290 - 21129 "HINFO IN 8280138097893442510.5169536380750645255. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023990945s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e40a4f9b16a6001c5ae0925a33fdc6dedeeb89585171a66821936c02876500f5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51300 - 29788 "HINFO IN 4563362523290822031.8774679367264300029. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069789178s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-459506
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-459506
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=functional-459506
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T22_35_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 22:35:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-459506
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 22:43:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 22:42:43 +0000   Fri, 26 Sep 2025 22:35:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 22:42:43 +0000   Fri, 26 Sep 2025 22:35:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 22:42:43 +0000   Fri, 26 Sep 2025 22:35:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 22:42:43 +0000   Fri, 26 Sep 2025 22:35:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-459506
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 05e4574455ab4b559c781aee570b04b3
	  System UUID:                d46c27bc-3376-49b5-80bd-4cdd4f761af8
	  Boot ID:                    d6777c8b-c717-4851-a50e-a884fc659348
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-c4qtx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  default                     hello-node-connect-7d85dfc575-g9scz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-66bc5c9577-4vrmt                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m27s
	  kube-system                 etcd-functional-459506                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m33s
	  kube-system                 kindnet-l54kz                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m28s
	  kube-system                 kube-apiserver-functional-459506              250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-controller-manager-functional-459506     200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 kube-proxy-2wtsn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-scheduler-functional-459506              100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-5xhv2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-59n29         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m27s                  kube-proxy       
	  Normal  Starting                 6m29s                  kube-proxy       
	  Normal  NodeHasSufficientPID     7m33s                  kubelet          Node functional-459506 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m33s                  kubelet          Node functional-459506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m33s                  kubelet          Node functional-459506 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m33s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m28s                  node-controller  Node functional-459506 event: Registered Node functional-459506 in Controller
	  Normal  Starting                 6m34s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m33s (x8 over 6m34s)  kubelet          Node functional-459506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m33s (x8 over 6m34s)  kubelet          Node functional-459506 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m33s (x7 over 6m34s)  kubelet          Node functional-459506 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m28s                  node-controller  Node functional-459506 event: Registered Node functional-459506 in Controller
	
	
	==> dmesg <==
	[Sep26 22:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001877] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.086010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.387443] i8042: Warning: Keylock active
	[  +0.011484] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004689] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000998] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.001003] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000986] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.001141] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000947] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.001004] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001049] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001043] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.448971] block sda: the capability attribute has been deprecated.
	[  +0.076726] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021403] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.907524] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [15228ae0744fa3d8d71e9ed9acb7601ebe23cd47d92475f3358c2b085a409570] <==
	{"level":"warn","ts":"2025-09-26T22:35:32.476089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.482777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.488658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.494335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.509868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.515650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.567513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36896","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:36:32.055542Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-26T22:36:32.055623Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-459506","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-26T22:36:32.055736Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:36:32.057359Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:36:32.057441Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:36:32.057539Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-26T22:36:32.058027Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-26T22:36:32.058019Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-26T22:36:32.057993Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:36:32.058205Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:36:32.058215Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:36:32.058223Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-26T22:36:32.058230Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-26T22:36:32.058236Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:36:32.059913Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-26T22:36:32.059967Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:36:32.060006Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-26T22:36:32.060044Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-459506","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [16663cf3fd5d10b83679013fbc8fc1c36cf64834b3eae54f2ef5c88da055361c] <==
	{"level":"warn","ts":"2025-09-26T22:36:36.375921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.383158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.396677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.402507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.408943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.415035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.422084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.429205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.435131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.441633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.448848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.454809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.462310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.475036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.480895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.487563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.493602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.499669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.505495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.512230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.519707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.529837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.536990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.543375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.594597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59580","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:43:08 up 25 min,  0 users,  load average: 0.38, 0.48, 0.50
	Linux functional-459506 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [6f0081db3233525107e5885f7a265bdd7fc9f0e70cd992771d9aaa4ca5682337] <==
	I0926 22:35:41.942119       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0926 22:35:41.942327       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0926 22:35:41.942436       1 main.go:148] setting mtu 1500 for CNI 
	I0926 22:35:41.942455       1 main.go:178] kindnetd IP family: "ipv4"
	I0926 22:35:41.942472       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-26T22:35:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0926 22:35:42.141703       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0926 22:35:42.141777       1 controller.go:381] "Waiting for informer caches to sync"
	I0926 22:35:42.141794       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0926 22:35:42.142354       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0926 22:35:42.541865       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0926 22:35:42.541885       1 metrics.go:72] Registering metrics
	I0926 22:35:42.541946       1 controller.go:711] "Syncing nftables rules"
	I0926 22:35:52.143620       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:35:52.143696       1 main.go:301] handling current node
	I0926 22:36:02.146896       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:36:02.146939       1 main.go:301] handling current node
	I0926 22:36:12.150828       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:36:12.150867       1 main.go:301] handling current node
	
	
	==> kindnet [8bd6c0af7c48b340de1bf3a68946c513cc533581ddd4d6b0e4bf351239517410] <==
	I0926 22:41:02.890766       1 main.go:301] handling current node
	I0926 22:41:12.891865       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:41:12.891895       1 main.go:301] handling current node
	I0926 22:41:22.891219       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:41:22.891258       1 main.go:301] handling current node
	I0926 22:41:32.900349       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:41:32.900381       1 main.go:301] handling current node
	I0926 22:41:42.891711       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:41:42.891790       1 main.go:301] handling current node
	I0926 22:41:52.899626       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:41:52.899658       1 main.go:301] handling current node
	I0926 22:42:02.897378       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:42:02.897411       1 main.go:301] handling current node
	I0926 22:42:12.891966       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:42:12.891998       1 main.go:301] handling current node
	I0926 22:42:22.891182       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:42:22.891216       1 main.go:301] handling current node
	I0926 22:42:32.891882       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:42:32.891920       1 main.go:301] handling current node
	I0926 22:42:42.891864       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:42:42.891909       1 main.go:301] handling current node
	I0926 22:42:52.891521       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:42:52.891549       1 main.go:301] handling current node
	I0926 22:43:02.891882       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:43:02.891918       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e5a30b07600415b080587a2a6d1ea08b2055828357a99617f952c06563d727e2] <==
	W0926 22:36:38.252795       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0926 22:36:38.254354       1 controller.go:667] quota admission added evaluator for: endpoints
	I0926 22:36:38.259391       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0926 22:36:38.800412       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0926 22:36:38.891708       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0926 22:36:38.938169       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0926 22:36:38.943133       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0926 22:36:40.467583       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0926 22:36:55.972979       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.201.64"}
	I0926 22:37:00.548826       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.39.241"}
	I0926 22:37:01.255927       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.193.82"}
	I0926 22:37:01.977342       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.187.36"}
	I0926 22:37:36.883256       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:37:42.252049       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:38:40.206792       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:39:01.111456       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:39:58.542426       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:40:18.473793       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:41:05.376069       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:41:39.665227       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:42:31.940639       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:42:47.816380       1 controller.go:667] quota admission added evaluator for: namespaces
	I0926 22:42:47.904783       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.131.25"}
	I0926 22:42:47.920679       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.221.240"}
	I0926 22:42:56.161937       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [6989a06c1aa044081666ea274870f6b2f62081f15fddafd098ceec849ef63965] <==
	I0926 22:36:23.268600       1 serving.go:386] Generated self-signed cert in-memory
	I0926 22:36:23.628583       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0926 22:36:23.628607       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:36:23.630025       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0926 22:36:23.630038       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0926 22:36:23.630385       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0926 22:36:23.630414       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0926 22:36:33.632748       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [8e603c814a88fbfef59bb33f84ea361bd131e385ab2a4d76cc74bde2bcfaea0d] <==
	I0926 22:36:40.364466       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0926 22:36:40.364501       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0926 22:36:40.364514       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0926 22:36:40.364568       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0926 22:36:40.364622       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0926 22:36:40.364632       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0926 22:36:40.364686       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0926 22:36:40.364719       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0926 22:36:40.364730       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0926 22:36:40.364834       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-459506"
	I0926 22:36:40.364892       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0926 22:36:40.366981       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0926 22:36:40.370586       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:36:40.370617       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:36:40.370637       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0926 22:36:40.372783       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0926 22:36:40.375018       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0926 22:36:40.377335       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0926 22:36:40.385601       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0926 22:42:47.862065       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:42:47.865843       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:42:47.866048       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:42:47.868815       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:42:47.870826       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:42:47.874609       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [a264dd8f5b4a2942f0efee0b51ce7ed0adb4b1ad43db0f5b5f0c22c0ba88de78] <==
	I0926 22:36:22.558358       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0926 22:36:22.559394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-459506&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:23.538404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-459506&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:26.410869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-459506&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:31.532055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-459506&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0926 22:36:38.959447       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:36:38.959487       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:36:38.959582       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:36:38.986993       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:36:38.987069       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:36:38.994049       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:36:38.994605       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:36:38.994630       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:36:38.997310       1 config.go:200] "Starting service config controller"
	I0926 22:36:38.997330       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:36:38.997362       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:36:38.997368       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:36:38.997424       1 config.go:309] "Starting node config controller"
	I0926 22:36:38.997430       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:36:38.997436       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:36:38.997657       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:36:38.997669       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:36:39.097677       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 22:36:39.097747       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:36:39.098062       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [d99db3f0a539a19d9cf4e02c8429489ff255a6c5d2fe9f2573700d0ce0397f8f] <==
	I0926 22:35:41.509205       1 server_linux.go:53] "Using iptables proxy"
	I0926 22:35:41.575220       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:35:41.675605       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:35:41.675637       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:35:41.675771       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:35:41.699353       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:35:41.699490       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:35:41.705720       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:35:41.706093       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:35:41.706127       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:35:41.707545       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:35:41.707554       1 config.go:200] "Starting service config controller"
	I0926 22:35:41.707573       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:35:41.707594       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:35:41.707612       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:35:41.707575       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:35:41.707672       1 config.go:309] "Starting node config controller"
	I0926 22:35:41.707679       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:35:41.707684       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:35:41.807791       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:35:41.807805       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0926 22:35:41.807837       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [bbe132d91cab00583cfbee8fc0b2b826f5d89380f0d1522dccdf84bc4002a864] <==
	E0926 22:35:32.972891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:35:32.972938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:35:32.972966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:35:32.972988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:35:32.973074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:35:32.973076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:35:32.973105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:35:32.973193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:35:32.973192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:35:32.973179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:35:33.793455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:35:33.799444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 22:35:33.877548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:35:33.893413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 22:35:33.999974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:35:34.069240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:35:34.105348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:35:34.130498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:35:34.140448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I0926 22:35:34.470155       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:36:21.883098       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:36:21.883123       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0926 22:36:21.883227       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0926 22:36:21.883331       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0926 22:36:21.883366       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c894d70efe2fc6d275b679dc3901194c6f6800fe43d0055daf8fb4de89bdf15a] <==
	E0926 22:36:28.212606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:36:28.310457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0926 22:36:28.412275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:28.443003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:36:28.534103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 22:36:31.138080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:36:31.354330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:36:31.367786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:36:31.510528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 22:36:31.521081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 22:36:31.837947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:36:32.252990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:36:32.286651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 22:36:32.320204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:36:32.616030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:36:32.939676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:36:33.405067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:36:33.435786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:36:33.459236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:36:33.593227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:36:33.755685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:36:34.225507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:34.380598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0926 22:36:34.435490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I0926 22:36:46.721125       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 26 22:42:48 functional-459506 kubelet[4881]: E0926 22:42:48.179392    4881 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Sep 26 22:42:48 functional-459506 kubelet[4881]: E0926 22:42:48.179485    4881 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-g9scz_default(3352791e-ffd2-43f2-a616-6553c6db8a5f): ErrImagePull: failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:42:48 functional-459506 kubelet[4881]: E0926 22:42:48.179519    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-g9scz" podUID="3352791e-ffd2-43f2-a616-6553c6db8a5f"
	Sep 26 22:42:50 functional-459506 kubelet[4881]: E0926 22:42:50.551918    4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:42:50 functional-459506 kubelet[4881]: E0926 22:42:50.551987    4881 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:42:50 functional-459506 kubelet[4881]: E0926 22:42:50.552201    4881 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-5xhv2_kubernetes-dashboard(f48c8cd4-f309-4e69-a0b4-7c297b8f118d): ErrImagePull: failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:42:50 functional-459506 kubelet[4881]: E0926 22:42:50.552268    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5xhv2" podUID="f48c8cd4-f309-4e69-a0b4-7c297b8f118d"
	Sep 26 22:42:50 functional-459506 kubelet[4881]: E0926 22:42:50.790982    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5xhv2" podUID="f48c8cd4-f309-4e69-a0b4-7c297b8f118d"
	Sep 26 22:42:52 functional-459506 kubelet[4881]: E0926 22:42:52.789555    4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 26 22:42:52 functional-459506 kubelet[4881]: E0926 22:42:52.789609    4881 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 26 22:42:52 functional-459506 kubelet[4881]: E0926 22:42:52.789695    4881 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-59n29_kubernetes-dashboard(ff1b0900-53c3-461c-b185-87f7165859ca): ErrImagePull: failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:42:52 functional-459506 kubelet[4881]: E0926 22:42:52.789733    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-59n29" podUID="ff1b0900-53c3-461c-b185-87f7165859ca"
	Sep 26 22:42:52 functional-459506 kubelet[4881]: E0926 22:42:52.795492    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-59n29" podUID="ff1b0900-53c3-461c-b185-87f7165859ca"
	Sep 26 22:42:56 functional-459506 kubelet[4881]: E0926 22:42:56.947374    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="f0d2d088-a017-4e6f-8a58-bf2e6db70c49"
	Sep 26 22:42:57 functional-459506 kubelet[4881]: E0926 22:42:57.006195    4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 26 22:42:57 functional-459506 kubelet[4881]: E0926 22:42:57.006235    4881 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 26 22:42:57 functional-459506 kubelet[4881]: E0926 22:42:57.006324    4881 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(b5494cea-410c-40a9-85da-5cc71c798527): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:42:57 functional-459506 kubelet[4881]: E0926 22:42:57.006357    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="b5494cea-410c-40a9-85da-5cc71c798527"
	Sep 26 22:42:58 functional-459506 kubelet[4881]: E0926 22:42:58.946492    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-c4qtx" podUID="3d1f055e-4361-4aa1-83f9-7dc31c06573a"
	Sep 26 22:43:02 functional-459506 kubelet[4881]: E0926 22:43:02.947267    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-g9scz" podUID="3352791e-ffd2-43f2-a616-6553c6db8a5f"
	Sep 26 22:43:07 functional-459506 kubelet[4881]: E0926 22:43:07.184543    4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:43:07 functional-459506 kubelet[4881]: E0926 22:43:07.184595    4881 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:43:07 functional-459506 kubelet[4881]: E0926 22:43:07.184828    4881 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-5xhv2_kubernetes-dashboard(f48c8cd4-f309-4e69-a0b4-7c297b8f118d): ErrImagePull: failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:43:07 functional-459506 kubelet[4881]: E0926 22:43:07.184900    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5xhv2" podUID="f48c8cd4-f309-4e69-a0b4-7c297b8f118d"
	Sep 26 22:43:08 functional-459506 kubelet[4881]: E0926 22:43:08.947530    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="b5494cea-410c-40a9-85da-5cc71c798527"
	
	
	==> storage-provisioner [903a74e2d785332eef5dd63e71cab7027811128118514bd84afbc9721ac5c416] <==
	I0926 22:36:12.358555       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0926 22:36:12.365148       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0926 22:36:12.365186       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0926 22:36:12.367487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:36:12.373103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0926 22:36:12.373284       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0926 22:36:12.373437       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-459506_341486d7-6c55-48af-8df1-6e07d9290bc7!
	I0926 22:36:12.373420       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dcaf19cb-0770-4ca7-b54d-720d909e89f2", APIVersion:"v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-459506_341486d7-6c55-48af-8df1-6e07d9290bc7 became leader
	W0926 22:36:12.375205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:36:12.377966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0926 22:36:12.474582       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-459506_341486d7-6c55-48af-8df1-6e07d9290bc7!
	W0926 22:36:14.382126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:36:14.388844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:36:16.392242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:36:16.396206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cebf1f1ed6be19b56dc23481a5410552eccab7653863a9a3e2d0911b4bdc8aa3] <==
	W0926 22:42:44.855882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:42:46.859502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:42:46.863275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:42:48.865858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:42:48.870091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:42:50.872573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:42:50.875621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:42:52.878808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:42:52.881909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:42:54.884463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:42:54.888307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:42:56.890698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:42:56.895812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:42:58.898541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:42:58.902138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:00.905960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:00.909812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:02.912061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:02.916341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:04.918782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:04.922513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:06.925406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:06.928769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:08.931426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:08.935224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
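Every Failed/BackOff entry in the kubelet log above is the same underlying failure: anonymous pulls from registry-1.docker.io are being rejected with 429 toomanyrequests. A minimal sketch for checking how much anonymous pull quota the build host has left, assuming Docker's documented ratelimitpreview/test probe image and a host with curl and jq installed:

    # Fetch an anonymous token for the rate-limit probe repository, then read
    # the RateLimit-* headers from a HEAD request against its manifest.
    TOKEN=$(curl -fsS "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -fsS --head -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

Authenticating the pulls or fronting Docker Hub with a registry mirror raises or sidesteps the anonymous limit these tests are tripping over.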
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-459506 -n functional-459506
helpers_test.go:269: (dbg) Run:  kubectl --context functional-459506 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-c4qtx hello-node-connect-7d85dfc575-g9scz nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-5xhv2 kubernetes-dashboard-855c9754f9-59n29
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-459506 describe pod busybox-mount hello-node-75c85bcc94-c4qtx hello-node-connect-7d85dfc575-g9scz nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-5xhv2 kubernetes-dashboard-855c9754f9-59n29
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-459506 describe pod busybox-mount hello-node-75c85bcc94-c4qtx hello-node-connect-7d85dfc575-g9scz nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-5xhv2 kubernetes-dashboard-855c9754f9-59n29: exit status 1 (86.421298ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-459506/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:42:30 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  containerd://ca62526b2c327497c75dc175ee6636f9d7c65b49b65c963619f5f8b5205b4a44
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 26 Sep 2025 22:42:33 +0000
	      Finished:     Fri, 26 Sep 2025 22:42:33 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ksn8n (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-ksn8n:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  39s   default-scheduler  Successfully assigned default/busybox-mount to functional-459506
	  Normal  Pulling    38s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     36s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.295s (2.295s including waiting). Image size: 2395207 bytes.
	  Normal  Created    36s   kubelet            Created container: mount-munger
	  Normal  Started    36s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-c4qtx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-459506/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:37:00 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p27jz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-p27jz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m9s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-c4qtx to functional-459506
	  Warning  Failed     4m41s (x3 over 5m51s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m21s (x5 over 6m9s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m18s (x2 over 6m6s)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m18s (x5 over 6m6s)   kubelet            Error: ErrImagePull
	  Warning  Failed     56s (x20 over 6m5s)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    41s (x21 over 6m5s)    kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-g9scz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-459506/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:37:01 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zn97f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zn97f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m8s                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-g9scz to functional-459506
	  Normal   Pulling    3m17s (x5 over 6m7s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m14s (x5 over 6m2s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m14s (x5 over 6m2s)  kubelet            Error: ErrImagePull
	  Warning  Failed     60s (x20 over 6m1s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    47s (x21 over 6m1s)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-459506/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:37:01 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rk7pr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rk7pr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m8s                  default-scheduler  Successfully assigned default/nginx-svc to functional-459506
	  Normal   Pulling    2m53s (x5 over 6m8s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m50s (x5 over 6m4s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m50s (x5 over 6m4s)  kubelet            Error: ErrImagePull
	  Warning  Failed     54s (x20 over 6m3s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    42s (x21 over 6m3s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-459506/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:37:07 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zv4kq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-zv4kq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m2s                  default-scheduler  Successfully assigned default/sp-pod to functional-459506
	  Warning  Failed     5m59s                 kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m6s (x5 over 6m2s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m3s (x5 over 5m59s)  kubelet            Error: ErrImagePull
	  Warning  Failed     3m3s (x4 over 5m42s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     54s (x19 over 5m58s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    28s (x21 over 5m58s)  kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-5xhv2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-59n29" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-459506 describe pod busybox-mount hello-node-75c85bcc94-c4qtx hello-node-connect-7d85dfc575-g9scz nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-5xhv2 kubernetes-dashboard-855c9754f9-59n29: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (368.79s)
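helpers_test.go:280 above lists the pods that never left Pending, and all of them are waiting on images from docker.io. One way to keep this class of test off the registry entirely is to side-load the images into the node before the workloads are created, using minikube's image subcommand; a sketch, assuming the images are already present in the host's Docker daemon or saved as tarballs with image save:

    # Load the images the failing pods need into functional-459506's containerd
    # store so the kubelet never has to pull from registry-1.docker.io.
    out/minikube-linux-amd64 -p functional-459506 image load docker.io/nginx:alpine
    out/minikube-linux-amd64 -p functional-459506 image load docker.io/kicbase/echo-server:latest
    out/minikube-linux-amd64 -p functional-459506 image load docker.io/mysql:5.7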

                                                
                                    
TestFunctional/parallel/MySQL (602.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-459506 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-cv8kj" [0463eed8-e7cc-4a57-a2a5-94ce2843b138] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E0926 22:46:57.765292   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-459506 -n functional-459506
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-09-26 22:53:10.239162226 +0000 UTC m=+1454.494369476
functional_test.go:1804: (dbg) Run:  kubectl --context functional-459506 describe po mysql-5bb876957f-cv8kj -n default
functional_test.go:1804: (dbg) kubectl --context functional-459506 describe po mysql-5bb876957f-cv8kj -n default:
Name:             mysql-5bb876957f-cv8kj
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-459506/192.168.49.2
Start Time:       Fri, 26 Sep 2025 22:43:09 +0000
Labels:           app=mysql
                  pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP (mysql)
    Host Port:      0/TCP (mysql)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-549ls (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-549ls:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-5bb876957f-cv8kj to functional-459506
  Normal   Pulling    7m1s (x5 over 10m)      kubelet            Pulling image "docker.io/mysql:5.7"
  Warning  Failed     6m58s (x5 over 9m58s)   kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     6m58s (x5 over 9m58s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m55s (x19 over 9m58s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m31s (x21 over 9m58s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-459506 logs mysql-5bb876957f-cv8kj -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-459506 logs mysql-5bb876957f-cv8kj -n default: exit status 1 (62.747012ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-cv8kj" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-459506 logs mysql-5bb876957f-cv8kj -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
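The 10m0s wait performed at functional_test.go:1804 can be reproduced by hand against the same kubeconfig context; a sketch using plain kubectl that waits on the same label selector and, when the deadline expires, prints why each container is still waiting (here: ImagePullBackOff on docker.io/mysql:5.7):

    # Wait up to 10 minutes for the mysql pod to become Ready, as the test does.
    kubectl --context functional-459506 wait pod -l app=mysql --for=condition=Ready --timeout=10m

    # On timeout, show each container's waiting reason.
    kubectl --context functional-459506 get pod -l app=mysql \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].state.waiting.reason}{"\n"}{end}'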
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-459506
helpers_test.go:243: (dbg) docker inspect functional-459506:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917",
	        "Created": "2025-09-26T22:35:21.920836916Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 45420,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-26T22:35:21.951781694Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917/hostname",
	        "HostsPath": "/var/lib/docker/containers/d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917/hosts",
	        "LogPath": "/var/lib/docker/containers/d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917/d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917-json.log",
	        "Name": "/functional-459506",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-459506:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-459506",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d095d86ee54b789c0264c8ea1f1fab7f3405e518f1a24ca9897ce7c3ad464917",
	                "LowerDir": "/var/lib/docker/overlay2/bd668024ca4cca7750265350f3fd8afee0721ce008e144a8f8a5b04847ef3880-init/diff:/var/lib/docker/overlay2/9d3f38ae04ffa0ee7bbacc3f831d8e286eafea1eb3c677a38c62c87997e117c6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd668024ca4cca7750265350f3fd8afee0721ce008e144a8f8a5b04847ef3880/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd668024ca4cca7750265350f3fd8afee0721ce008e144a8f8a5b04847ef3880/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd668024ca4cca7750265350f3fd8afee0721ce008e144a8f8a5b04847ef3880/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-459506",
	                "Source": "/var/lib/docker/volumes/functional-459506/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-459506",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-459506",
	                "name.minikube.sigs.k8s.io": "functional-459506",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb0f8342093a0b817dd54ab2bfc7283d5c3b97c478a905330b0fb0f03d232a34",
	            "SandboxKey": "/var/run/docker/netns/fb0f8342093a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-459506": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:64:7a:80:ed:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b1d72584285bd0f2762e93cd89eea0f410798a5f4c51ad294c42f4fa0b4247fe",
	                    "EndpointID": "d3c98e2363a4eab3bdc87cfbc565ff15bb3e69f484dbf18a36fe7e0d357135a4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-459506",
	                        "d095d86ee54b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
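The NetworkSettings block in the inspect output above is where this suite's port checks come from (for example, the API server's 8441/tcp is bound to 127.0.0.1:32786). A one-line sketch for extracting a single mapping from the same data with a Go template instead of reading the JSON by hand:

    # Print the host port bound to the node container's API server port (8441/tcp).
    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-459506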
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-459506 -n functional-459506
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-459506 logs -n 25: (1.318456528s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                                 ARGS                                                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-459506 image save kicbase/echo-server:functional-459506 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image          │ functional-459506 image rm kicbase/echo-server:functional-459506 --alsologtostderr                                                                                    │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image          │ functional-459506 image ls                                                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image          │ functional-459506 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr                                       │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image          │ functional-459506 image ls                                                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ image          │ functional-459506 image save --daemon kicbase/echo-server:functional-459506 --alsologtostderr                                                                         │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │ 26 Sep 25 22:42 UTC │
	│ start          │ -p functional-459506 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                                                       │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ start          │ -p functional-459506 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                 │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ start          │ -p functional-459506 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                                                       │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-459506 --alsologtostderr -v=1                                                                                                        │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:42 UTC │                     │
	│ service        │ functional-459506 service list                                                                                                                                        │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ service        │ functional-459506 service list -o json                                                                                                                                │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ service        │ functional-459506 service --namespace=default --https --url hello-node                                                                                                │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │                     │
	│ service        │ functional-459506 service hello-node --url --format={{.IP}}                                                                                                           │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │                     │
	│ update-context │ functional-459506 update-context --alsologtostderr -v=2                                                                                                               │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ update-context │ functional-459506 update-context --alsologtostderr -v=2                                                                                                               │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ service        │ functional-459506 service hello-node --url                                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │                     │
	│ update-context │ functional-459506 update-context --alsologtostderr -v=2                                                                                                               │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ image          │ functional-459506 image ls --format short --alsologtostderr                                                                                                           │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ image          │ functional-459506 image ls --format yaml --alsologtostderr                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ ssh            │ functional-459506 ssh pgrep buildkitd                                                                                                                                 │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │                     │
	│ image          │ functional-459506 image build -t localhost/my-image:functional-459506 testdata/build --alsologtostderr                                                                │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ image          │ functional-459506 image ls --format json --alsologtostderr                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ image          │ functional-459506 image ls --format table --alsologtostderr                                                                                                           │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ image          │ functional-459506 image ls                                                                                                                                            │ functional-459506 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:42:46
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:42:46.896482   62711 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:42:46.896577   62711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:42:46.896585   62711 out.go:374] Setting ErrFile to fd 2...
	I0926 22:42:46.896589   62711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:42:46.896870   62711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
	I0926 22:42:46.897298   62711 out.go:368] Setting JSON to false
	I0926 22:42:46.898165   62711 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1502,"bootTime":1758925065,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:42:46.898235   62711 start.go:140] virtualization: kvm guest
	I0926 22:42:46.899971   62711 out.go:179] * [functional-459506] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:42:46.900984   62711 notify.go:220] Checking for updates...
	I0926 22:42:46.900989   62711 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:42:46.901968   62711 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:42:46.902910   62711 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-9508/kubeconfig
	I0926 22:42:46.904148   62711 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube
	I0926 22:42:46.905129   62711 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:42:46.906068   62711 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:42:46.907486   62711 config.go:182] Loaded profile config "functional-459506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0926 22:42:46.908008   62711 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:42:46.930088   62711 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:42:46.930160   62711 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:42:46.982478   62711 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-26 22:42:46.973277108 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:42:46.982575   62711 docker.go:318] overlay module found
	I0926 22:42:46.984004   62711 out.go:179] * Using the docker driver based on existing profile
	I0926 22:42:46.985050   62711 start.go:304] selected driver: docker
	I0926 22:42:46.985070   62711 start.go:924] validating driver "docker" against &{Name:functional-459506 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-459506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:42:46.985173   62711 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:42:46.986851   62711 out.go:203] 
	W0926 22:42:46.987810   62711 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0926 22:42:46.988796   62711 out.go:203] 
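
	The RSRC_INSUFFICIENT_REQ_MEMORY exit above means this start invocation asked for only 250 MiB, below minikube's usable minimum of 1800 MB. A minimal sketch of a start command that clears that floor follows; the 4096 MB value mirrors the Memory:4096 field in the loaded profile config and the profile name is taken from this log, so treat the exact flags as illustrative rather than the command the test harness actually runs.

	  # sketch only: request at least the 1800 MB minimum named in the error above
	  out/minikube-linux-amd64 start -p functional-459506 --driver=docker \
	    --container-runtime=containerd --memory=4096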
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ecfca78cdf2eb       9056ab77afb8e       6 seconds ago       Running             echo-server               0                   efcad9f8183ec       hello-node-connect-7d85dfc575-g9scz
	4391ea9d1dd50       9056ab77afb8e       12 seconds ago      Running             echo-server               0                   816a84600ca1d       hello-node-75c85bcc94-c4qtx
	ca62526b2c327       56cc512116c8f       10 minutes ago      Exited              mount-munger              0                   fecae25f41ca9       busybox-mount
	cebf1f1ed6be1       6e38f40d628db       16 minutes ago      Running             storage-provisioner       2                   0f104490635a6       storage-provisioner
	e5a30b0760041       90550c43ad2bc       16 minutes ago      Running             kube-apiserver            0                   78751bafaf7a4       kube-apiserver-functional-459506
	8e603c814a88f       a0af72f2ec6d6       16 minutes ago      Running             kube-controller-manager   2                   9f72adfa3efa4       kube-controller-manager-functional-459506
	16663cf3fd5d1       5f1f5298c888d       16 minutes ago      Running             etcd                      1                   91a9c6f7a15e2       etcd-functional-459506
	6989a06c1aa04       a0af72f2ec6d6       16 minutes ago      Exited              kube-controller-manager   1                   9f72adfa3efa4       kube-controller-manager-functional-459506
	c894d70efe2fc       46169d968e920       16 minutes ago      Running             kube-scheduler            1                   0f4b676619c64       kube-scheduler-functional-459506
	a264dd8f5b4a2       df0860106674d       16 minutes ago      Running             kube-proxy                1                   546d39f814afe       kube-proxy-2wtsn
	8bd6c0af7c48b       409467f978b4a       16 minutes ago      Running             kindnet-cni               1                   1eaf123c6da9f       kindnet-l54kz
	4a47257142396       52546a367cc9e       16 minutes ago      Running             coredns                   1                   475dc21959dca       coredns-66bc5c9577-4vrmt
	903a74e2d7853       6e38f40d628db       16 minutes ago      Exited              storage-provisioner       1                   0f104490635a6       storage-provisioner
	e40a4f9b16a60       52546a367cc9e       17 minutes ago      Exited              coredns                   0                   475dc21959dca       coredns-66bc5c9577-4vrmt
	6f0081db32335       409467f978b4a       17 minutes ago      Exited              kindnet-cni               0                   1eaf123c6da9f       kindnet-l54kz
	d99db3f0a539a       df0860106674d       17 minutes ago      Exited              kube-proxy                0                   546d39f814afe       kube-proxy-2wtsn
	bbe132d91cab0       46169d968e920       17 minutes ago      Exited              kube-scheduler            0                   0f4b676619c64       kube-scheduler-functional-459506
	15228ae0744fa       5f1f5298c888d       17 minutes ago      Exited              etcd                      0                   91a9c6f7a15e2       etcd-functional-459506
	
	
	==> containerd <==
	Sep 26 22:52:56 functional-459506 containerd[3896]: time="2025-09-26T22:52:56.949395911Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:52:57 functional-459506 containerd[3896]: time="2025-09-26T22:52:57.561862884Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:52:58 functional-459506 containerd[3896]: time="2025-09-26T22:52:58.443649701Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 26 22:52:58 functional-459506 containerd[3896]: time="2025-09-26T22:52:58.444347215Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=12115"
	Sep 26 22:52:58 functional-459506 containerd[3896]: time="2025-09-26T22:52:58.445290658Z" level=info msg="ImageUpdate event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 26 22:52:58 functional-459506 containerd[3896]: time="2025-09-26T22:52:58.446977285Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 26 22:52:58 functional-459506 containerd[3896]: time="2025-09-26T22:52:58.447458374Z" level=info msg="Pulled image \"kicbase/echo-server:latest\" with image id \"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\", repo tag \"docker.io/kicbase/echo-server:latest\", repo digest \"docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6\", size \"2138418\" in 1.499558683s"
	Sep 26 22:52:58 functional-459506 containerd[3896]: time="2025-09-26T22:52:58.447490345Z" level=info msg="PullImage \"kicbase/echo-server:latest\" returns image reference \"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\""
	Sep 26 22:52:58 functional-459506 containerd[3896]: time="2025-09-26T22:52:58.451495413Z" level=info msg="CreateContainer within sandbox \"816a84600ca1d50227340e8fb0a96b0041595c1ed07bb40a4555261669f3743f\" for container &ContainerMetadata{Name:echo-server,Attempt:0,}"
	Sep 26 22:52:58 functional-459506 containerd[3896]: time="2025-09-26T22:52:58.461314282Z" level=info msg="CreateContainer within sandbox \"816a84600ca1d50227340e8fb0a96b0041595c1ed07bb40a4555261669f3743f\" for &ContainerMetadata{Name:echo-server,Attempt:0,} returns container id \"4391ea9d1dd50161105cd3a2f841930befadd267e613844bf4f5bb91a6952981\""
	Sep 26 22:52:58 functional-459506 containerd[3896]: time="2025-09-26T22:52:58.461741570Z" level=info msg="StartContainer for \"4391ea9d1dd50161105cd3a2f841930befadd267e613844bf4f5bb91a6952981\""
	Sep 26 22:52:58 functional-459506 containerd[3896]: time="2025-09-26T22:52:58.512511801Z" level=info msg="StartContainer for \"4391ea9d1dd50161105cd3a2f841930befadd267e613844bf4f5bb91a6952981\" returns successfully"
	Sep 26 22:53:03 functional-459506 containerd[3896]: time="2025-09-26T22:53:03.947510028Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Sep 26 22:53:03 functional-459506 containerd[3896]: time="2025-09-26T22:53:03.948910131Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:53:04 functional-459506 containerd[3896]: time="2025-09-26T22:53:04.547931489Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 26 22:53:04 functional-459506 containerd[3896]: time="2025-09-26T22:53:04.559724013Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 26 22:53:04 functional-459506 containerd[3896]: time="2025-09-26T22:53:04.560363294Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=5423"
	Sep 26 22:53:04 functional-459506 containerd[3896]: time="2025-09-26T22:53:04.561804327Z" level=info msg="Pulled image \"kicbase/echo-server:latest\" with image id \"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\", repo tag \"docker.io/kicbase/echo-server:latest\", repo digest \"docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6\", size \"2138418\" in 614.254531ms"
	Sep 26 22:53:04 functional-459506 containerd[3896]: time="2025-09-26T22:53:04.561834169Z" level=info msg="PullImage \"kicbase/echo-server:latest\" returns image reference \"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\""
	Sep 26 22:53:04 functional-459506 containerd[3896]: time="2025-09-26T22:53:04.565420281Z" level=info msg="CreateContainer within sandbox \"efcad9f8183ec7ec70379359403982761b9e8ee408e56378b0ee1050eb53e146\" for container &ContainerMetadata{Name:echo-server,Attempt:0,}"
	Sep 26 22:53:04 functional-459506 containerd[3896]: time="2025-09-26T22:53:04.576866284Z" level=info msg="CreateContainer within sandbox \"efcad9f8183ec7ec70379359403982761b9e8ee408e56378b0ee1050eb53e146\" for &ContainerMetadata{Name:echo-server,Attempt:0,} returns container id \"ecfca78cdf2eb323c640c40f53f3151af0caaec9abe646681d251a9df847a44b\""
	Sep 26 22:53:04 functional-459506 containerd[3896]: time="2025-09-26T22:53:04.577277753Z" level=info msg="StartContainer for \"ecfca78cdf2eb323c640c40f53f3151af0caaec9abe646681d251a9df847a44b\""
	Sep 26 22:53:04 functional-459506 containerd[3896]: time="2025-09-26T22:53:04.628724590Z" level=info msg="StartContainer for \"ecfca78cdf2eb323c640c40f53f3151af0caaec9abe646681d251a9df847a44b\" returns successfully"
	Sep 26 22:53:10 functional-459506 containerd[3896]: time="2025-09-26T22:53:10.948180610Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Sep 26 22:53:10 functional-459506 containerd[3896]: time="2025-09-26T22:53:10.949541219Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
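
	The repeated "failed to decode hosts.toml" / "invalid `host` tree" errors above mean containerd found a registry hosts.toml it could not parse; that file lives under /etc/containerd/certs.d/<registry>/hosts.toml inside the node (reachable with `out/minikube-linux-amd64 -p functional-459506 ssh`). A minimal sketch of a well-formed file, written from inside the node, is below; the docker.io endpoint is illustrative and not taken from this report.

	# sketch only, run inside the node; <<- strips the leading tab indentation
	sudo mkdir -p /etc/containerd/certs.d/docker.io
	sudo tee /etc/containerd/certs.d/docker.io/hosts.toml <<-'EOF'
	server = "https://registry-1.docker.io"

	[host."https://registry-1.docker.io"]
	  capabilities = ["pull", "resolve"]
	EOF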
	
	
	==> coredns [4a47257142396d0a917fecabd4ae47f729eb1ab3570ffb7517ff9f5248fd93df] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48290 - 21129 "HINFO IN 8280138097893442510.5169536380750645255. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023990945s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e40a4f9b16a6001c5ae0925a33fdc6dedeeb89585171a66821936c02876500f5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51300 - 29788 "HINFO IN 4563362523290822031.8774679367264300029. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069789178s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-459506
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-459506
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=functional-459506
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T22_35_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 22:35:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-459506
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 22:53:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 22:53:05 +0000   Fri, 26 Sep 2025 22:35:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 22:53:05 +0000   Fri, 26 Sep 2025 22:35:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 22:53:05 +0000   Fri, 26 Sep 2025 22:35:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 22:53:05 +0000   Fri, 26 Sep 2025 22:35:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-459506
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 05e4574455ab4b559c781aee570b04b3
	  System UUID:                d46c27bc-3376-49b5-80bd-4cdd4f761af8
	  Boot ID:                    d6777c8b-c717-4851-a50e-a884fc659348
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-c4qtx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  default                     hello-node-connect-7d85dfc575-g9scz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  default                     mysql-5bb876957f-cv8kj                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-66bc5c9577-4vrmt                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     17m
	  kube-system                 etcd-functional-459506                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         17m
	  kube-system                 kindnet-l54kz                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-functional-459506              250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-functional-459506     200m (2%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-2wtsn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-functional-459506              100m (1%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-5xhv2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-59n29         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientPID     17m                kubelet          Node functional-459506 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node functional-459506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node functional-459506 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           17m                node-controller  Node functional-459506 event: Registered Node functional-459506 in Controller
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node functional-459506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node functional-459506 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node functional-459506 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node functional-459506 event: Registered Node functional-459506 in Controller
	
	
	==> dmesg <==
	[Sep26 22:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001877] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.086010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.387443] i8042: Warning: Keylock active
	[  +0.011484] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004689] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000998] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.001003] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000986] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.001141] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000947] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.001004] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001049] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001043] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.448971] block sda: the capability attribute has been deprecated.
	[  +0.076726] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021403] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.907524] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [15228ae0744fa3d8d71e9ed9acb7601ebe23cd47d92475f3358c2b085a409570] <==
	{"level":"warn","ts":"2025-09-26T22:35:32.476089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.482777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.488658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.494335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.509868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.515650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:35:32.567513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36896","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:36:32.055542Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-26T22:36:32.055623Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-459506","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-26T22:36:32.055736Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:36:32.057359Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:36:32.057441Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:36:32.057539Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-26T22:36:32.058027Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-26T22:36:32.058019Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-26T22:36:32.057993Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:36:32.058205Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:36:32.058215Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:36:32.058223Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-26T22:36:32.058230Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-26T22:36:32.058236Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:36:32.059913Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-26T22:36:32.059967Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:36:32.060006Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-26T22:36:32.060044Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-459506","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [16663cf3fd5d10b83679013fbc8fc1c36cf64834b3eae54f2ef5c88da055361c] <==
	{"level":"warn","ts":"2025-09-26T22:36:36.422084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.429205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.435131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.441633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.448848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.454809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.462310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.475036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.480895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.487563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.493602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.499669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.505495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.512230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.519707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.529837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.536990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.543375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:36.594597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59580","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:46:36.125492Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1071}
	{"level":"info","ts":"2025-09-26T22:46:36.144507Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1071,"took":"18.66727ms","hash":1726823260,"current-db-size-bytes":3829760,"current-db-size":"3.8 MB","current-db-size-in-use-bytes":1908736,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-09-26T22:46:36.144558Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1726823260,"revision":1071,"compact-revision":-1}
	{"level":"info","ts":"2025-09-26T22:51:36.130187Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1581}
	{"level":"info","ts":"2025-09-26T22:51:36.133760Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1581,"took":"3.210595ms","hash":1305363483,"current-db-size-bytes":3829760,"current-db-size":"3.8 MB","current-db-size-in-use-bytes":2584576,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2025-09-26T22:51:36.133796Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1305363483,"revision":1581,"compact-revision":1071}
	
	
	==> kernel <==
	 22:53:11 up 35 min,  0 users,  load average: 0.04, 0.13, 0.29
	Linux functional-459506 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [6f0081db3233525107e5885f7a265bdd7fc9f0e70cd992771d9aaa4ca5682337] <==
	I0926 22:35:41.942119       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0926 22:35:41.942327       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0926 22:35:41.942436       1 main.go:148] setting mtu 1500 for CNI 
	I0926 22:35:41.942455       1 main.go:178] kindnetd IP family: "ipv4"
	I0926 22:35:41.942472       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-26T22:35:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0926 22:35:42.141703       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0926 22:35:42.141777       1 controller.go:381] "Waiting for informer caches to sync"
	I0926 22:35:42.141794       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0926 22:35:42.142354       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0926 22:35:42.541865       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0926 22:35:42.541885       1 metrics.go:72] Registering metrics
	I0926 22:35:42.541946       1 controller.go:711] "Syncing nftables rules"
	I0926 22:35:52.143620       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:35:52.143696       1 main.go:301] handling current node
	I0926 22:36:02.146896       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:36:02.146939       1 main.go:301] handling current node
	I0926 22:36:12.150828       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:36:12.150867       1 main.go:301] handling current node
	
	
	==> kindnet [8bd6c0af7c48b340de1bf3a68946c513cc533581ddd4d6b0e4bf351239517410] <==
	I0926 22:51:02.890969       1 main.go:301] handling current node
	I0926 22:51:12.891583       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:51:12.891613       1 main.go:301] handling current node
	I0926 22:51:22.890968       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:51:22.891000       1 main.go:301] handling current node
	I0926 22:51:32.890867       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:51:32.890897       1 main.go:301] handling current node
	I0926 22:51:42.890728       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:51:42.890799       1 main.go:301] handling current node
	I0926 22:51:52.896892       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:51:52.896930       1 main.go:301] handling current node
	I0926 22:52:02.891302       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:52:02.891335       1 main.go:301] handling current node
	I0926 22:52:12.891280       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:52:12.891313       1 main.go:301] handling current node
	I0926 22:52:22.890976       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:52:22.891011       1 main.go:301] handling current node
	I0926 22:52:32.897405       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:52:32.897434       1 main.go:301] handling current node
	I0926 22:52:42.891307       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:52:42.891341       1 main.go:301] handling current node
	I0926 22:52:52.895846       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:52:52.895875       1 main.go:301] handling current node
	I0926 22:53:02.893804       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:53:02.893838       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e5a30b07600415b080587a2a6d1ea08b2055828357a99617f952c06563d727e2] <==
	I0926 22:41:05.376069       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:41:39.665227       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:42:31.940639       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:42:47.816380       1 controller.go:667] quota admission added evaluator for: namespaces
	I0926 22:42:47.904783       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.131.25"}
	I0926 22:42:47.920679       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.221.240"}
	I0926 22:42:56.161937       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:43:09.909055       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.107.107.136"}
	I0926 22:43:44.613318       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:44:04.440996       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:44:46.386965       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:45:15.977328       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:46:10.613817       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:46:36.975011       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0926 22:46:44.738140       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:47:39.547504       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:48:00.320378       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:48:52.444421       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:49:07.771629       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:50:15.260799       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:50:33.195801       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:51:20.145428       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:51:49.913501       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:52:28.726799       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:53:09.386370       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [6989a06c1aa044081666ea274870f6b2f62081f15fddafd098ceec849ef63965] <==
	I0926 22:36:23.268600       1 serving.go:386] Generated self-signed cert in-memory
	I0926 22:36:23.628583       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0926 22:36:23.628607       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:36:23.630025       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0926 22:36:23.630038       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0926 22:36:23.630385       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0926 22:36:23.630414       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0926 22:36:33.632748       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [8e603c814a88fbfef59bb33f84ea361bd131e385ab2a4d76cc74bde2bcfaea0d] <==
	I0926 22:36:40.364466       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0926 22:36:40.364501       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0926 22:36:40.364514       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0926 22:36:40.364568       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0926 22:36:40.364622       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0926 22:36:40.364632       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0926 22:36:40.364686       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0926 22:36:40.364719       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0926 22:36:40.364730       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0926 22:36:40.364834       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-459506"
	I0926 22:36:40.364892       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0926 22:36:40.366981       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0926 22:36:40.370586       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:36:40.370617       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:36:40.370637       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0926 22:36:40.372783       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0926 22:36:40.375018       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0926 22:36:40.377335       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0926 22:36:40.385601       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0926 22:42:47.862065       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:42:47.865843       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:42:47.866048       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:42:47.868815       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:42:47.870826       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:42:47.874609       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [a264dd8f5b4a2942f0efee0b51ce7ed0adb4b1ad43db0f5b5f0c22c0ba88de78] <==
	I0926 22:36:22.558358       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0926 22:36:22.559394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-459506&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:23.538404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-459506&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:26.410869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-459506&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:31.532055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-459506&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0926 22:36:38.959447       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:36:38.959487       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:36:38.959582       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:36:38.986993       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:36:38.987069       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:36:38.994049       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:36:38.994605       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:36:38.994630       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:36:38.997310       1 config.go:200] "Starting service config controller"
	I0926 22:36:38.997330       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:36:38.997362       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:36:38.997368       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:36:38.997424       1 config.go:309] "Starting node config controller"
	I0926 22:36:38.997430       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:36:38.997436       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:36:38.997657       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:36:38.997669       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:36:39.097677       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 22:36:39.097747       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:36:39.098062       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [d99db3f0a539a19d9cf4e02c8429489ff255a6c5d2fe9f2573700d0ce0397f8f] <==
	I0926 22:35:41.509205       1 server_linux.go:53] "Using iptables proxy"
	I0926 22:35:41.575220       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:35:41.675605       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:35:41.675637       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:35:41.675771       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:35:41.699353       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:35:41.699490       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:35:41.705720       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:35:41.706093       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:35:41.706127       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:35:41.707545       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:35:41.707554       1 config.go:200] "Starting service config controller"
	I0926 22:35:41.707573       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:35:41.707594       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:35:41.707612       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:35:41.707575       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:35:41.707672       1 config.go:309] "Starting node config controller"
	I0926 22:35:41.707679       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:35:41.707684       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:35:41.807791       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:35:41.807805       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0926 22:35:41.807837       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [bbe132d91cab00583cfbee8fc0b2b826f5d89380f0d1522dccdf84bc4002a864] <==
	E0926 22:35:32.972891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:35:32.972938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:35:32.972966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:35:32.972988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:35:32.973074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:35:32.973076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:35:32.973105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:35:32.973193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:35:32.973192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:35:32.973179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:35:33.793455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:35:33.799444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 22:35:33.877548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:35:33.893413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 22:35:33.999974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:35:34.069240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:35:34.105348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:35:34.130498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:35:34.140448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I0926 22:35:34.470155       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:36:21.883098       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:36:21.883123       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0926 22:36:21.883227       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0926 22:36:21.883331       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0926 22:36:21.883366       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c894d70efe2fc6d275b679dc3901194c6f6800fe43d0055daf8fb4de89bdf15a] <==
	E0926 22:36:28.212606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:36:28.310457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0926 22:36:28.412275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:28.443003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:36:28.534103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 22:36:31.138080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:36:31.354330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:36:31.367786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:36:31.510528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 22:36:31.521081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 22:36:31.837947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:36:32.252990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:36:32.286651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 22:36:32.320204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:36:32.616030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:36:32.939676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:36:33.405067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:36:33.435786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:36:33.459236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:36:33.593227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:36:33.755685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:36:34.225507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:34.380598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0926 22:36:34.435490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I0926 22:36:46.721125       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 26 22:52:22 functional-459506 kubelet[4881]: E0926 22:52:22.950562    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="f0d2d088-a017-4e6f-8a58-bf2e6db70c49"
	Sep 26 22:52:24 functional-459506 kubelet[4881]: E0926 22:52:24.950470    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-cv8kj" podUID="0463eed8-e7cc-4a57-a2a5-94ce2843b138"
	Sep 26 22:52:28 functional-459506 kubelet[4881]: E0926 22:52:28.947489    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="b5494cea-410c-40a9-85da-5cc71c798527"
	Sep 26 22:52:29 functional-459506 kubelet[4881]: E0926 22:52:29.947423    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5xhv2" podUID="f48c8cd4-f309-4e69-a0b4-7c297b8f118d"
	Sep 26 22:52:33 functional-459506 kubelet[4881]: E0926 22:52:33.947266    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-g9scz" podUID="3352791e-ffd2-43f2-a616-6553c6db8a5f"
	Sep 26 22:52:33 functional-459506 kubelet[4881]: E0926 22:52:33.947940    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-59n29" podUID="ff1b0900-53c3-461c-b185-87f7165859ca"
	Sep 26 22:52:33 functional-459506 kubelet[4881]: E0926 22:52:33.947961    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="f0d2d088-a017-4e6f-8a58-bf2e6db70c49"
	Sep 26 22:52:34 functional-459506 kubelet[4881]: E0926 22:52:34.947253    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-c4qtx" podUID="3d1f055e-4361-4aa1-83f9-7dc31c06573a"
	Sep 26 22:52:39 functional-459506 kubelet[4881]: E0926 22:52:39.947693    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-cv8kj" podUID="0463eed8-e7cc-4a57-a2a5-94ce2843b138"
	Sep 26 22:52:43 functional-459506 kubelet[4881]: E0926 22:52:43.946867    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="b5494cea-410c-40a9-85da-5cc71c798527"
	Sep 26 22:52:43 functional-459506 kubelet[4881]: E0926 22:52:43.947526    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5xhv2" podUID="f48c8cd4-f309-4e69-a0b4-7c297b8f118d"
	Sep 26 22:52:45 functional-459506 kubelet[4881]: E0926 22:52:45.947114    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-c4qtx" podUID="3d1f055e-4361-4aa1-83f9-7dc31c06573a"
	Sep 26 22:52:45 functional-459506 kubelet[4881]: E0926 22:52:45.947840    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-59n29" podUID="ff1b0900-53c3-461c-b185-87f7165859ca"
	Sep 26 22:52:47 functional-459506 kubelet[4881]: E0926 22:52:47.947818    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="f0d2d088-a017-4e6f-8a58-bf2e6db70c49"
	Sep 26 22:52:48 functional-459506 kubelet[4881]: E0926 22:52:48.947440    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-g9scz" podUID="3352791e-ffd2-43f2-a616-6553c6db8a5f"
	Sep 26 22:52:50 functional-459506 kubelet[4881]: E0926 22:52:50.947454    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-cv8kj" podUID="0463eed8-e7cc-4a57-a2a5-94ce2843b138"
	Sep 26 22:52:56 functional-459506 kubelet[4881]: E0926 22:52:56.948278    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-59n29" podUID="ff1b0900-53c3-461c-b185-87f7165859ca"
	Sep 26 22:52:57 functional-459506 kubelet[4881]: E0926 22:52:57.947410    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="b5494cea-410c-40a9-85da-5cc71c798527"
	Sep 26 22:52:57 functional-459506 kubelet[4881]: E0926 22:52:57.948102    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5xhv2" podUID="f48c8cd4-f309-4e69-a0b4-7c297b8f118d"
	Sep 26 22:52:59 functional-459506 kubelet[4881]: I0926 22:52:59.097115    4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-75c85bcc94-c4qtx" podStartSLOduration=1.5534379010000001 podStartE2EDuration="15m59.097095779s" podCreationTimestamp="2025-09-26 22:37:00 +0000 UTC" firstStartedPulling="2025-09-26 22:37:00.904612148 +0000 UTC m=+26.038864529" lastFinishedPulling="2025-09-26 22:52:58.44827002 +0000 UTC m=+983.582522407" observedRunningTime="2025-09-26 22:52:59.096858225 +0000 UTC m=+984.231110621" watchObservedRunningTime="2025-09-26 22:52:59.097095779 +0000 UTC m=+984.231348175"
	Sep 26 22:53:01 functional-459506 kubelet[4881]: E0926 22:53:01.947376    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="f0d2d088-a017-4e6f-8a58-bf2e6db70c49"
	Sep 26 22:53:02 functional-459506 kubelet[4881]: E0926 22:53:02.948187    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-cv8kj" podUID="0463eed8-e7cc-4a57-a2a5-94ce2843b138"
	Sep 26 22:53:05 functional-459506 kubelet[4881]: I0926 22:53:05.112606    4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-connect-7d85dfc575-g9scz" podStartSLOduration=1.873541578 podStartE2EDuration="16m4.112583856s" podCreationTimestamp="2025-09-26 22:37:01 +0000 UTC" firstStartedPulling="2025-09-26 22:37:02.323505431 +0000 UTC m=+27.457757805" lastFinishedPulling="2025-09-26 22:53:04.562547705 +0000 UTC m=+989.696800083" observedRunningTime="2025-09-26 22:53:05.111901271 +0000 UTC m=+990.246153668" watchObservedRunningTime="2025-09-26 22:53:05.112583856 +0000 UTC m=+990.246836252"
	Sep 26 22:53:08 functional-459506 kubelet[4881]: E0926 22:53:08.948239    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5xhv2" podUID="f48c8cd4-f309-4e69-a0b4-7c297b8f118d"
	Sep 26 22:53:10 functional-459506 kubelet[4881]: E0926 22:53:10.948123    4881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-59n29" podUID="ff1b0900-53c3-461c-b185-87f7165859ca"
	
	
	==> storage-provisioner [903a74e2d785332eef5dd63e71cab7027811128118514bd84afbc9721ac5c416] <==
	I0926 22:36:12.358555       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0926 22:36:12.365148       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0926 22:36:12.365186       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0926 22:36:12.367487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:36:12.373103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0926 22:36:12.373284       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0926 22:36:12.373437       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-459506_341486d7-6c55-48af-8df1-6e07d9290bc7!
	I0926 22:36:12.373420       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dcaf19cb-0770-4ca7-b54d-720d909e89f2", APIVersion:"v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-459506_341486d7-6c55-48af-8df1-6e07d9290bc7 became leader
	W0926 22:36:12.375205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:36:12.377966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0926 22:36:12.474582       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-459506_341486d7-6c55-48af-8df1-6e07d9290bc7!
	W0926 22:36:14.382126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:36:14.388844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:36:16.392242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:36:16.396206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cebf1f1ed6be19b56dc23481a5410552eccab7653863a9a3e2d0911b4bdc8aa3] <==
	W0926 22:52:46.862440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:52:48.865271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:52:48.868746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:52:50.871241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:52:50.874580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:52:52.877253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:52:52.881730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:52:54.884505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:52:54.888020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:52:56.891392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:52:56.896165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:52:58.899080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:52:58.902388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:00.906967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:00.910912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:02.914449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:02.918063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:04.920849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:04.925343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:06.927842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:06.931246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:08.933988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:08.938525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:10.942244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:10.947454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-459506 -n functional-459506
helpers_test.go:269: (dbg) Run:  kubectl --context functional-459506 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-cv8kj nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-5xhv2 kubernetes-dashboard-855c9754f9-59n29
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-459506 describe pod busybox-mount mysql-5bb876957f-cv8kj nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-5xhv2 kubernetes-dashboard-855c9754f9-59n29
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-459506 describe pod busybox-mount mysql-5bb876957f-cv8kj nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-5xhv2 kubernetes-dashboard-855c9754f9-59n29: exit status 1 (79.06262ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-459506/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:42:30 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  containerd://ca62526b2c327497c75dc175ee6636f9d7c65b49b65c963619f5f8b5205b4a44
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 26 Sep 2025 22:42:33 +0000
	      Finished:     Fri, 26 Sep 2025 22:42:33 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ksn8n (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-ksn8n:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-459506
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.295s (2.295s including waiting). Image size: 2395207 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-cv8kj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-459506/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:43:09 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-549ls (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-549ls:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-5bb876957f-cv8kj to functional-459506
	  Normal   Pulling    7m3s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     7m (x5 over 10m)      kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m (x5 over 10m)      kubelet            Error: ErrImagePull
	  Warning  Failed     4m57s (x19 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m33s (x21 over 10m)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-459506/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:37:01 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rk7pr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rk7pr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  16m                 default-scheduler  Successfully assigned default/nginx-svc to functional-459506
	  Normal   Pulling    12m (x5 over 16m)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     12m (x5 over 16m)   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     12m (x5 over 16m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    63s (x63 over 16m)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     63s (x63 over 16m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-459506/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:37:07 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zv4kq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-zv4kq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  16m                 default-scheduler  Successfully assigned default/sp-pod to functional-459506
	  Warning  Failed     16m                 kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    13m (x5 over 16m)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     13m (x5 over 16m)   kubelet            Error: ErrImagePull
	  Warning  Failed     13m (x4 over 15m)   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    55s (x64 over 16m)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     55s (x64 over 16m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-5xhv2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-59n29" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-459506 describe pod busybox-mount mysql-5bb876957f-cv8kj nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-5xhv2 kubernetes-dashboard-855c9754f9-59n29: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.63s)
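Editor's note: the events above show every pull of docker.io/nginx being rejected with HTTP 429, Docker Hub's unauthenticated pull rate limit, so sp-pod never leaves ImagePullBackOff and the MySQL test times out. A minimal sketch of how one might confirm by hand which pods on this profile are stuck for that reason (the functional-459506 context comes from the logs; the jsonpath and grep pattern are illustrative, not part of the test):

    # List pods whose containers are waiting on ImagePullBackOff/ErrImagePull.
    kubectl --context functional-459506 get pods -A \
      -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.status.containerStatuses[*].state.waiting.reason}{"\n"}{end}' \
      | grep -E 'ImagePullBackOff|ErrImagePull'

    # Show the most recent pull-related events for the affected pod.
    kubectl --context functional-459506 get events -n default \
      --field-selector involvedObject.name=sp-pod --sort-by=.lastTimestamp | tail -n 10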

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-459506 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-459506 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-c4qtx" [3d1f055e-4361-4aa1-83f9-7dc31c06573a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-459506 -n functional-459506
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-26 22:47:00.846075244 +0000 UTC m=+1085.101282323
functional_test.go:1460: (dbg) Run:  kubectl --context functional-459506 describe po hello-node-75c85bcc94-c4qtx -n default
functional_test.go:1460: (dbg) kubectl --context functional-459506 describe po hello-node-75c85bcc94-c4qtx -n default:
Name:             hello-node-75c85bcc94-c4qtx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-459506/192.168.49.2
Start Time:       Fri, 26 Sep 2025 22:37:00 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p27jz (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-p27jz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-c4qtx to functional-459506
  Warning  Failed     8m32s (x3 over 9m42s)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    7m12s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m9s (x2 over 9m57s)    kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     7m9s (x5 over 9m57s)    kubelet            Error: ErrImagePull
  Warning  Failed     4m47s (x20 over 9m56s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m32s (x21 over 9m56s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-459506 logs hello-node-75c85bcc94-c4qtx -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-459506 logs hello-node-75c85bcc94-c4qtx -n default: exit status 1 (65.861775ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-c4qtx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-459506 logs hello-node-75c85bcc94-c4qtx -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.61s)
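Editor's note: the hello-node deployment fails for the same reason, kicbase/echo-server cannot be pulled anonymously from Docker Hub (429). One way to sidestep the anonymous rate limit, sketched below under the assumption that the host can still pull the image (via credentials, a cache, or a mirror), is to pull it on the host and side-load it into the minikube node; the profile name is taken from the logs, everything else is illustrative:

    # Pull on the host, then copy the image into the cluster's containerd store
    # so the kubelet never has to contact registry-1.docker.io for it.
    docker pull kicbase/echo-server:latest
    minikube -p functional-459506 image load kicbase/echo-server:latest

    # Verify the image is now present inside the node.
    minikube -p functional-459506 image ls | grep echo-server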

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-459506 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [f0d2d088-a017-4e6f-8a58-bf2e6db70c49] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-459506 -n functional-459506
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-09-26 22:41:01.551853803 +0000 UTC m=+725.807060898
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-459506 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-459506 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-459506/192.168.49.2
Start Time:       Fri, 26 Sep 2025 22:37:01 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rk7pr (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-rk7pr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-459506
  Normal   Pulling    45s (x5 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     42s (x5 over 3m56s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     42s (x5 over 3m56s)  kubelet            Error: ErrImagePull
  Normal   BackOff    6s (x14 over 3m55s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     6s (x14 over 3m55s)  kubelet            Error: ImagePullBackOff
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-459506 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-459506 logs nginx-svc -n default: exit status 1 (64.30627ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-459506 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.60s)
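Editor's note: nginx-svc is blocked by the same Docker Hub 429 limit. Besides side-loading images, another common mitigation is to make the pulls authenticated so they count against a higher rate limit; a rough sketch, with placeholder credentials (nothing below is part of the test run):

    # Create a Docker Hub pull secret and attach it to the default service
    # account, so new pods in the default namespace pull with credentials.
    kubectl --context functional-459506 create secret docker-registry dockerhub-creds \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<user> --docker-password=<token>

    kubectl --context functional-459506 patch serviceaccount default \
      -p '{"imagePullSecrets": [{"name": "dockerhub-creds"}]}'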

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (83.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0926 22:41:01.675510   13040 retry.go:31] will retry after 4.103550224s: Temporary Error: Get "http:": http: no Host in request URL
I0926 22:41:05.779267   13040 retry.go:31] will retry after 4.09864085s: Temporary Error: Get "http:": http: no Host in request URL
I0926 22:41:09.878492   13040 retry.go:31] will retry after 7.204968351s: Temporary Error: Get "http:": http: no Host in request URL
I0926 22:41:17.083819   13040 retry.go:31] will retry after 7.222087276s: Temporary Error: Get "http:": http: no Host in request URL
I0926 22:41:24.306824   13040 retry.go:31] will retry after 11.755562623s: Temporary Error: Get "http:": http: no Host in request URL
I0926 22:41:36.063217   13040 retry.go:31] will retry after 31.949400976s: Temporary Error: Get "http:": http: no Host in request URL
E0926 22:41:57.765137   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I0926 22:42:08.013019   13040 retry.go:31] will retry after 17.468610776s: Temporary Error: Get "http:": http: no Host in request URL
E0926 22:42:25.467815   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-459506 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx-svc   LoadBalancer   10.111.193.82   10.111.193.82   80:31015/TCP   5m24s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (83.86s)
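Editor's note: the empty URL in `Get "http:"` is a knock-on effect of the previous failure. The LoadBalancer IP was assigned, but nginx-svc never had a ready pod behind it, so the tunnel test ended up retrying an address-less request. A quick manual check of that state (a sketch; it assumes `minikube -p functional-459506 tunnel` is still running in another shell):

    # The service reports an external IP, but with no ready endpoints the curl fails.
    kubectl --context functional-459506 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    kubectl --context functional-459506 get endpoints nginx-svc
    curl -sS --max-time 5 "http://$(kubectl --context functional-459506 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/"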

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-459506 service --namespace=default --https --url hello-node: exit status 115 (523.97855ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30155
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-459506 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)
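Editor's note: SVC_UNREACHABLE here (and in the Format/URL subtests below) means the hello-node service exists and a NodePort URL is printed, but no running pod backs it because the deployment is still in ImagePullBackOff. A sketch of how one might confirm that before suspecting the `minikube service` command itself:

    # The endpoints list stays empty until a hello-node pod becomes Ready.
    kubectl --context functional-459506 get deployment hello-node
    kubectl --context functional-459506 get endpoints hello-node -o wide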

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-459506 service hello-node --url --format={{.IP}}: exit status 115 (523.495466ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-459506 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-459506 service hello-node --url: exit status 115 (522.354942ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30155
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-459506 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30155
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                    
x
+
TestKubernetesUpgrade (631.04s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-655811 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-655811 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (23.209612554s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-655811
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-655811: (1.218890349s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-655811 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-655811 status --format={{.Host}}: exit status 7 (74.91756ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-655811 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-655811 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (23.728978396s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-655811 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-655811 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-655811 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (70.751456ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-655811] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-9508/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-655811
	    minikube start -p kubernetes-upgrade-655811 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6558112 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-655811 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
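Editor's note: the exit-106 refusal above is the expected outcome of the downgrade attempt; minikube declines to move an existing v1.34.0 cluster back to v1.28.0 and instead prints the delete/recreate options shown. To see what the cluster is actually running at this point, the same check the test performs at version_upgrade_test.go:248 can be done by hand (the grep is only illustrative):

    # Server version should still report v1.34.x after the refused downgrade.
    kubectl --context kubernetes-upgrade-655811 version --output=json | grep -A3 serverVersion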
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-655811 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-655811 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 80 (8m7.456906104s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-655811] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-9508/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-655811" primary control-plane node in "kubernetes-upgrade-655811" cluster
	* Pulling base image v0.0.48 ...
	* Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 23:13:58.211196  240327 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:13:58.211268  240327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:13:58.211275  240327 out.go:374] Setting ErrFile to fd 2...
	I0926 23:13:58.211279  240327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:13:58.211483  240327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
	I0926 23:13:58.211898  240327 out.go:368] Setting JSON to false
	I0926 23:13:58.212895  240327 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3373,"bootTime":1758925065,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 23:13:58.212976  240327 start.go:140] virtualization: kvm guest
	I0926 23:13:58.214194  240327 out.go:179] * [kubernetes-upgrade-655811] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 23:13:58.215349  240327 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 23:13:58.215355  240327 notify.go:220] Checking for updates...
	I0926 23:13:58.216979  240327 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 23:13:58.217925  240327 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-9508/kubeconfig
	I0926 23:13:58.218774  240327 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube
	I0926 23:13:58.219633  240327 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 23:13:58.220424  240327 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 23:13:58.221602  240327 config.go:182] Loaded profile config "kubernetes-upgrade-655811": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0926 23:13:58.222227  240327 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 23:13:58.246054  240327 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 23:13:58.246178  240327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 23:13:58.304903  240327 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-09-26 23:13:58.29245599 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 23:13:58.305060  240327 docker.go:318] overlay module found
	I0926 23:13:58.306503  240327 out.go:179] * Using the docker driver based on existing profile
	I0926 23:13:58.307421  240327 start.go:304] selected driver: docker
	I0926 23:13:58.307436  240327 start.go:924] validating driver "docker" against &{Name:kubernetes-upgrade-655811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubernetes-upgrade-655811 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:13:58.307540  240327 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 23:13:58.308297  240327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 23:13:58.370520  240327 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-09-26 23:13:58.357710185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 23:13:58.370897  240327 cni.go:84] Creating CNI manager for ""
	I0926 23:13:58.370977  240327 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0926 23:13:58.371031  240327 start.go:348] cluster config:
	{Name:kubernetes-upgrade-655811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubernetes-upgrade-655811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I0926 23:13:58.372826  240327 out.go:179] * Starting "kubernetes-upgrade-655811" primary control-plane node in "kubernetes-upgrade-655811" cluster
	I0926 23:13:58.373828  240327 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0926 23:13:58.374869  240327 out.go:179] * Pulling base image v0.0.48 ...
	I0926 23:13:58.375683  240327 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0926 23:13:58.375722  240327 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-9508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0926 23:13:58.375733  240327 cache.go:58] Caching tarball of preloaded images
	I0926 23:13:58.375807  240327 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0926 23:13:58.375840  240327 preload.go:172] Found /home/jenkins/minikube-integration/21642-9508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 23:13:58.375855  240327 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0926 23:13:58.375955  240327 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/kubernetes-upgrade-655811/config.json ...
	I0926 23:13:58.397528  240327 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0926 23:13:58.397546  240327 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0926 23:13:58.397561  240327 cache.go:232] Successfully downloaded all kic artifacts
	I0926 23:13:58.397593  240327 start.go:360] acquireMachinesLock for kubernetes-upgrade-655811: {Name:mka1e8386f25387a719c0433ed38b5791052ddb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 23:13:58.397655  240327 start.go:364] duration metric: took 38.143µs to acquireMachinesLock for "kubernetes-upgrade-655811"
	I0926 23:13:58.397681  240327 start.go:96] Skipping create...Using existing machine configuration
	I0926 23:13:58.397691  240327 fix.go:54] fixHost starting: 
	I0926 23:13:58.397937  240327 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-655811 --format={{.State.Status}}
	I0926 23:13:58.417911  240327 fix.go:112] recreateIfNeeded on kubernetes-upgrade-655811: state=Running err=<nil>
	W0926 23:13:58.417959  240327 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 23:13:58.419381  240327 out.go:252] * Updating the running docker "kubernetes-upgrade-655811" container ...
	I0926 23:13:58.419423  240327 machine.go:93] provisionDockerMachine start ...
	I0926 23:13:58.419514  240327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-655811
	I0926 23:13:58.437151  240327 main.go:141] libmachine: Using SSH client type: native
	I0926 23:13:58.437410  240327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I0926 23:13:58.437426  240327 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 23:13:58.571421  240327 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-655811
	
	I0926 23:13:58.571448  240327 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-655811"
	I0926 23:13:58.571506  240327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-655811
	I0926 23:13:58.589801  240327 main.go:141] libmachine: Using SSH client type: native
	I0926 23:13:58.590134  240327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I0926 23:13:58.590152  240327 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-655811 && echo "kubernetes-upgrade-655811" | sudo tee /etc/hostname
	I0926 23:13:58.736678  240327 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-655811
	
	I0926 23:13:58.736760  240327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-655811
	I0926 23:13:58.755448  240327 main.go:141] libmachine: Using SSH client type: native
	I0926 23:13:58.755676  240327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I0926 23:13:58.755701  240327 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-655811' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-655811/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-655811' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 23:13:58.893077  240327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 23:13:58.893112  240327 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21642-9508/.minikube CaCertPath:/home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21642-9508/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21642-9508/.minikube}
	I0926 23:13:58.893139  240327 ubuntu.go:190] setting up certificates
	I0926 23:13:58.893152  240327 provision.go:84] configureAuth start
	I0926 23:13:58.893218  240327 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-655811
	I0926 23:13:58.914883  240327 provision.go:143] copyHostCerts
	I0926 23:13:58.914963  240327 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-9508/.minikube/ca.pem, removing ...
	I0926 23:13:58.914986  240327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-9508/.minikube/ca.pem
	I0926 23:13:58.915053  240327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21642-9508/.minikube/ca.pem (1078 bytes)
	I0926 23:13:58.915159  240327 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-9508/.minikube/cert.pem, removing ...
	I0926 23:13:58.915170  240327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-9508/.minikube/cert.pem
	I0926 23:13:58.915206  240327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21642-9508/.minikube/cert.pem (1123 bytes)
	I0926 23:13:58.915278  240327 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-9508/.minikube/key.pem, removing ...
	I0926 23:13:58.915287  240327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-9508/.minikube/key.pem
	I0926 23:13:58.915319  240327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21642-9508/.minikube/key.pem (1679 bytes)
	I0926 23:13:58.915383  240327 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21642-9508/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-655811 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-655811 localhost minikube]
	I0926 23:13:59.118669  240327 provision.go:177] copyRemoteCerts
	I0926 23:13:59.118719  240327 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 23:13:59.118759  240327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-655811
	I0926 23:13:59.138699  240327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/kubernetes-upgrade-655811/id_rsa Username:docker}
	I0926 23:13:59.233901  240327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0926 23:13:59.257049  240327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0926 23:13:59.281433  240327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 23:13:59.303956  240327 provision.go:87] duration metric: took 410.792431ms to configureAuth
	I0926 23:13:59.303978  240327 ubuntu.go:206] setting minikube options for container-runtime
	I0926 23:13:59.304153  240327 config.go:182] Loaded profile config "kubernetes-upgrade-655811": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0926 23:13:59.304165  240327 machine.go:96] duration metric: took 884.730525ms to provisionDockerMachine
	I0926 23:13:59.304173  240327 start.go:293] postStartSetup for "kubernetes-upgrade-655811" (driver="docker")
	I0926 23:13:59.304187  240327 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 23:13:59.304245  240327 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 23:13:59.304293  240327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-655811
	I0926 23:13:59.322382  240327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/kubernetes-upgrade-655811/id_rsa Username:docker}
	I0926 23:13:59.419881  240327 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 23:13:59.423118  240327 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0926 23:13:59.423161  240327 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0926 23:13:59.423173  240327 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0926 23:13:59.423182  240327 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0926 23:13:59.423196  240327 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-9508/.minikube/addons for local assets ...
	I0926 23:13:59.423253  240327 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-9508/.minikube/files for local assets ...
	I0926 23:13:59.423354  240327 filesync.go:149] local asset: /home/jenkins/minikube-integration/21642-9508/.minikube/files/etc/ssl/certs/130402.pem -> 130402.pem in /etc/ssl/certs
	I0926 23:13:59.423470  240327 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 23:13:59.432197  240327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/files/etc/ssl/certs/130402.pem --> /etc/ssl/certs/130402.pem (1708 bytes)
	I0926 23:13:59.457201  240327 start.go:296] duration metric: took 153.01382ms for postStartSetup
	I0926 23:13:59.457270  240327 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 23:13:59.457315  240327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-655811
	I0926 23:13:59.479638  240327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/kubernetes-upgrade-655811/id_rsa Username:docker}
	I0926 23:13:59.574209  240327 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0926 23:13:59.578332  240327 fix.go:56] duration metric: took 1.180639759s for fixHost
	I0926 23:13:59.578352  240327 start.go:83] releasing machines lock for "kubernetes-upgrade-655811", held for 1.180681447s
	I0926 23:13:59.578409  240327 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-655811
	I0926 23:13:59.597674  240327 ssh_runner.go:195] Run: cat /version.json
	I0926 23:13:59.597730  240327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-655811
	I0926 23:13:59.597742  240327 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 23:13:59.597846  240327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-655811
	I0926 23:13:59.619037  240327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/kubernetes-upgrade-655811/id_rsa Username:docker}
	I0926 23:13:59.619487  240327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/kubernetes-upgrade-655811/id_rsa Username:docker}
	I0926 23:13:59.808875  240327 ssh_runner.go:195] Run: systemctl --version
	I0926 23:13:59.813959  240327 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 23:13:59.819365  240327 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0926 23:13:59.839727  240327 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0926 23:13:59.839828  240327 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 23:13:59.850126  240327 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0926 23:13:59.850152  240327 start.go:495] detecting cgroup driver to use...
	I0926 23:13:59.850184  240327 detect.go:190] detected "systemd" cgroup driver on host os
	I0926 23:13:59.850235  240327 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 23:13:59.862859  240327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 23:13:59.874797  240327 docker.go:218] disabling cri-docker service (if available) ...
	I0926 23:13:59.874842  240327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0926 23:13:59.893820  240327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0926 23:13:59.906189  240327 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0926 23:14:00.004505  240327 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0926 23:14:00.091166  240327 docker.go:234] disabling docker service ...
	I0926 23:14:00.091243  240327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0926 23:14:00.105975  240327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0926 23:14:00.117810  240327 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0926 23:14:00.204265  240327 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0926 23:14:00.310034  240327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0926 23:14:00.329299  240327 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 23:14:00.350610  240327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0926 23:14:00.361974  240327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 23:14:00.372873  240327 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0926 23:14:00.372932  240327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0926 23:14:00.383121  240327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 23:14:00.395442  240327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 23:14:00.405591  240327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 23:14:00.415313  240327 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 23:14:00.432246  240327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 23:14:00.453354  240327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 23:14:00.467044  240327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
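	At this point crictl has been pointed at the containerd socket (the tee above) and /etc/containerd/config.toml has been rewritten in place: pause image 3.10.1, runc v2, systemd cgroups, CNI conf dir, and unprivileged ports re-enabled. A sketch of how to spot-check the result; the expected values are taken from the printf and sed expressions above, and their placement in the file depends on the original config:

	    sudo cat /etc/crictl.yaml
	    # runtime-endpoint: unix:///run/containerd/containerd.sock
	    sudo grep -nE 'sandbox_image|SystemdCgroup|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml
	    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
	    #   enable_unprivileged_ports = true
	    #   conf_dir = "/etc/cni/net.d"
	    #   SystemdCgroup = true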
	I0926 23:14:00.476944  240327 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 23:14:00.484946  240327 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 23:14:00.493124  240327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:14:00.578012  240327 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 23:14:00.691673  240327 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0926 23:14:00.691746  240327 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0926 23:14:00.695631  240327 start.go:563] Will wait 60s for crictl version
	I0926 23:14:00.695675  240327 ssh_runner.go:195] Run: which crictl
	I0926 23:14:00.699021  240327 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 23:14:00.731999  240327 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0926 23:14:00.732067  240327 ssh_runner.go:195] Run: containerd --version
	I0926 23:14:00.756201  240327 ssh_runner.go:195] Run: containerd --version
	I0926 23:14:00.780692  240327 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0926 23:14:00.781711  240327 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-655811 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0926 23:14:00.798528  240327 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0926 23:14:00.802206  240327 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-655811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubernetes-upgrade-655811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPa
th: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 23:14:00.802306  240327 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0926 23:14:00.802348  240327 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:14:00.834464  240327 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-controller-manager:v1.34.0". assuming images are not preloaded.
	I0926 23:14:00.834519  240327 ssh_runner.go:195] Run: which lz4
	I0926 23:14:00.838108  240327 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0926 23:14:00.841473  240327 ssh_runner.go:356] copy: skipping /preloaded.tar.lz4 (exists)
	I0926 23:14:00.841493  240327 containerd.go:563] duration metric: took 3.423604ms to copy over tarball
	I0926 23:14:00.841542  240327 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0926 23:14:03.467348  240327 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.62574965s)
	I0926 23:14:03.467450  240327 kubeadm.go:909] preload failed, will try to load cached images: extracting tarball: 
	** stderr ** 
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	
	** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
	stdout:
	
	stderr:
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
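	The "Cannot open: File exists" entries mean the target paths already exist under the overlayfs snapshot, apparently left over from an earlier start of this node, so tar exits with status 2 and minikube falls back to loading cached images, as the following lines show. A sketch for reproducing the extraction step by hand; the tolerance flags are illustrative only, minikube itself does not pass them:

	    # same command minikube runs, assuming the tarball is still at /preloaded.tar.lz4 inside the node
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    # GNU tar can change how existing paths are handled:
	    #   --skip-old-files   keep whatever is already on disk
	    #   --overwrite        replace it with the tarball's copy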
	I0926 23:14:03.467544  240327 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:14:03.509707  240327 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-controller-manager:v1.34.0". assuming images are not preloaded.
	I0926 23:14:03.509739  240327 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.0 registry.k8s.io/kube-controller-manager:v1.34.0 registry.k8s.io/kube-scheduler:v1.34.0 registry.k8s.io/kube-proxy:v1.34.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0926 23:14:03.509824  240327 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 23:14:03.509823  240327 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.0
	I0926 23:14:03.509851  240327 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I0926 23:14:03.509822  240327 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.0
	I0926 23:14:03.509880  240327 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I0926 23:14:03.509883  240327 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.0
	I0926 23:14:03.509865  240327 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I0926 23:14:03.509873  240327 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.0
	I0926 23:14:03.511253  240327 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 23:14:03.511262  240327 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I0926 23:14:03.511258  240327 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I0926 23:14:03.511279  240327 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.0
	I0926 23:14:03.511256  240327 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.0
	I0926 23:14:03.511251  240327 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.0
	I0926 23:14:03.511327  240327 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I0926 23:14:03.511377  240327 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.0
	I0926 23:14:03.691967  240327 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.0" and sha "90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90"
	I0926 23:14:03.692037  240327 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.0
	I0926 23:14:03.696695  240327 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.0" and sha "46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc"
	I0926 23:14:03.696792  240327 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.0
	I0926 23:14:03.702562  240327 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I0926 23:14:03.702627  240327 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I0926 23:14:03.709187  240327 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115"
	I0926 23:14:03.709252  240327 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I0926 23:14:03.722691  240327 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.0" and sha "a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634"
	I0926 23:14:03.722769  240327 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.0
	I0926 23:14:03.731630  240327 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.0" and sha "df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce"
	I0926 23:14:03.731693  240327 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.0
	I0926 23:14:03.748674  240327 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I0926 23:14:03.748737  240327 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
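	Each of the existence checks above shells out to ctr against containerd's k8s.io namespace; an empty listing under the header means the image is absent and has to be transferred. To repeat one by hand (command taken verbatim from the log):

	    sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1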
	I0926 23:14:03.760177  240327 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.0" does not exist at hash "46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc" in container runtime
	I0926 23:14:03.760324  240327 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.0
	I0926 23:14:03.760386  240327 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.0" does not exist at hash "90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90" in container runtime
	I0926 23:14:03.760404  240327 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.0
	I0926 23:14:03.760447  240327 ssh_runner.go:195] Run: which crictl
	I0926 23:14:03.760525  240327 ssh_runner.go:195] Run: which crictl
	I0926 23:14:03.804259  240327 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I0926 23:14:03.804394  240327 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I0926 23:14:03.804447  240327 ssh_runner.go:195] Run: which crictl
	I0926 23:14:03.807854  240327 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I0926 23:14:03.807893  240327 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I0926 23:14:03.807941  240327 ssh_runner.go:195] Run: which crictl
	I0926 23:14:03.812342  240327 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.0" does not exist at hash "a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634" in container runtime
	I0926 23:14:03.812377  240327 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.0
	I0926 23:14:03.812434  240327 ssh_runner.go:195] Run: which crictl
	I0926 23:14:03.812547  240327 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.0" needs transfer: "registry.k8s.io/kube-proxy:v1.34.0" does not exist at hash "df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce" in container runtime
	I0926 23:14:03.812570  240327 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.0
	I0926 23:14:03.812595  240327 ssh_runner.go:195] Run: which crictl
	I0926 23:14:03.817105  240327 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I0926 23:14:03.817140  240327 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I0926 23:14:03.817209  240327 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.0
	I0926 23:14:03.817258  240327 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I0926 23:14:03.817259  240327 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.0
	I0926 23:14:03.817313  240327 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I0926 23:14:03.817378  240327 ssh_runner.go:195] Run: which crictl
	I0926 23:14:03.820963  240327 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.0
	I0926 23:14:03.821029  240327 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.0
	I0926 23:14:04.321948  240327 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I0926 23:14:04.322357  240327 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.0
	I0926 23:14:04.322111  240327 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21642-9508/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0
	I0926 23:14:04.322133  240327 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.0
	I0926 23:14:04.322247  240327 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I0926 23:14:04.322263  240327 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I0926 23:14:04.322524  240327 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.0
	I0926 23:14:04.409704  240327 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.0
	I0926 23:14:04.409722  240327 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.0
	I0926 23:14:04.409778  240327 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21642-9508/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I0926 23:14:04.409829  240327 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I0926 23:14:04.412981  240327 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.0
	I0926 23:14:04.413055  240327 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I0926 23:14:04.475698  240327 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21642-9508/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0
	I0926 23:14:04.475718  240327 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21642-9508/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0
	I0926 23:14:04.475811  240327 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21642-9508/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I0926 23:14:04.475889  240327 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21642-9508/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I0926 23:14:04.475908  240327 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21642-9508/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0
	I0926 23:14:04.938069  240327 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0926 23:14:04.938129  240327 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 23:14:04.970371  240327 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0926 23:14:04.970416  240327 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 23:14:04.970465  240327 ssh_runner.go:195] Run: which crictl
	I0926 23:14:04.974857  240327 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 23:14:05.015118  240327 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 23:14:05.055527  240327 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 23:14:05.098748  240327 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21642-9508/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0926 23:14:05.098866  240327 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0926 23:14:05.103553  240327 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0926 23:14:05.103571  240327 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0926 23:14:05.103615  240327 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0926 23:14:05.316038  240327 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21642-9508/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0926 23:14:05.316092  240327 cache_images.go:93] duration metric: took 1.806324237s to LoadCachedImages
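	For the one image that was present in the local cache, the load path is: copy the tarball to /var/lib/minikube/images, then import it into containerd's k8s.io namespace (the same command as the Run line above). A sketch, with a CRI-side check added for illustration:

	    sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	    sudo crictl images | grep storage-provisioner    # should now list gcr.io/k8s-minikube/storage-provisioner:v5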
	W0926 23:14:05.316152  240327 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/21642-9508/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/21642-9508/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0: no such file or directory
	I0926 23:14:05.316163  240327 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.0 containerd true true} ...
	I0926 23:14:05.316271  240327 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-655811 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:kubernetes-upgrade-655811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 23:14:05.316325  240327 ssh_runner.go:195] Run: sudo crictl info
	I0926 23:14:05.364556  240327 cni.go:84] Creating CNI manager for ""
	I0926 23:14:05.364582  240327 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0926 23:14:05.364605  240327 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 23:14:05.364632  240327 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-655811 NodeName:kubernetes-upgrade-655811 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 23:14:05.364787  240327 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-655811"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 23:14:05.364863  240327 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0926 23:14:05.376708  240327 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 23:14:05.376801  240327 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 23:14:05.387788  240327 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I0926 23:14:05.412766  240327 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
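	After the two scp calls above, both the kubelet unit and its kubeadm drop-in are on disk; systemd can render the effective unit with a single command (a quick check, not something this run executes):

	    sudo systemctl cat kubelet
	    # prints /lib/systemd/system/kubelet.service followed by
	    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf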
	I0926 23:14:05.431393  240327 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
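	The scp above stages the generated kubeadm config (shown in full earlier) at /var/tmp/minikube/kubeadm.yaml.new. A sketch of how such a file could be sanity-checked before kubeadm consumes it, assuming kubeadm is among the staged binaries and the build ships the validate subcommand:

	    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new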
	I0926 23:14:05.449215  240327 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0926 23:14:05.452672  240327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:14:05.539421  240327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:14:05.555483  240327 certs.go:69] Setting up /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/kubernetes-upgrade-655811 for IP: 192.168.85.2
	I0926 23:14:05.555510  240327 certs.go:195] generating shared ca certs ...
	I0926 23:14:05.555524  240327 certs.go:227] acquiring lock for ca certs: {Name:mk080975279b3a5ea38bd0bf3f7fdebf08ad146a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:14:05.555632  240327 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21642-9508/.minikube/ca.key
	I0926 23:14:05.555668  240327 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21642-9508/.minikube/proxy-client-ca.key
	I0926 23:14:05.555677  240327 certs.go:257] generating profile certs ...
	I0926 23:14:05.555781  240327 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/kubernetes-upgrade-655811/client.key
	I0926 23:14:05.555841  240327 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/kubernetes-upgrade-655811/apiserver.key.9847bf59
	I0926 23:14:05.555880  240327 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/kubernetes-upgrade-655811/proxy-client.key
	I0926 23:14:05.555983  240327 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/13040.pem (1338 bytes)
	W0926 23:14:05.556015  240327 certs.go:480] ignoring /home/jenkins/minikube-integration/21642-9508/.minikube/certs/13040_empty.pem, impossibly tiny 0 bytes
	I0926 23:14:05.556024  240327 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 23:14:05.556044  240327 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca.pem (1078 bytes)
	I0926 23:14:05.556068  240327 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/cert.pem (1123 bytes)
	I0926 23:14:05.556102  240327 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/key.pem (1679 bytes)
	I0926 23:14:05.556142  240327 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/files/etc/ssl/certs/130402.pem (1708 bytes)
	I0926 23:14:05.556831  240327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 23:14:05.581918  240327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 23:14:05.606171  240327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 23:14:05.629988  240327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 23:14:05.657999  240327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/kubernetes-upgrade-655811/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0926 23:14:05.685129  240327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/kubernetes-upgrade-655811/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0926 23:14:05.709118  240327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/kubernetes-upgrade-655811/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 23:14:05.734157  240327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/kubernetes-upgrade-655811/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0926 23:14:05.780682  240327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/certs/13040.pem --> /usr/share/ca-certificates/13040.pem (1338 bytes)
	I0926 23:14:05.819231  240327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/files/etc/ssl/certs/130402.pem --> /usr/share/ca-certificates/130402.pem (1708 bytes)
	I0926 23:14:05.856701  240327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 23:14:05.882089  240327 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 23:14:05.900824  240327 ssh_runner.go:195] Run: openssl version
	I0926 23:14:05.906433  240327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130402.pem && ln -fs /usr/share/ca-certificates/130402.pem /etc/ssl/certs/130402.pem"
	I0926 23:14:05.918294  240327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130402.pem
	I0926 23:14:05.923182  240327 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 26 22:35 /usr/share/ca-certificates/130402.pem
	I0926 23:14:05.923239  240327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130402.pem
	I0926 23:14:05.931840  240327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130402.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 23:14:05.944076  240327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 23:14:05.958097  240327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:14:05.964350  240327 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:14:05.964446  240327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:14:05.974592  240327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 23:14:05.986385  240327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13040.pem && ln -fs /usr/share/ca-certificates/13040.pem /etc/ssl/certs/13040.pem"
	I0926 23:14:05.997484  240327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13040.pem
	I0926 23:14:06.001335  240327 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 26 22:35 /usr/share/ca-certificates/13040.pem
	I0926 23:14:06.001382  240327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13040.pem
	I0926 23:14:06.009559  240327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13040.pem /etc/ssl/certs/51391683.0"
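	The symlink names above are OpenSSL's subject-hash lookup form: the hash printed by openssl x509 -hash becomes a <hash>.0 link in /etc/ssl/certs. Using the minikube CA as in the log (b5213941.0 for minikubeCA.pem):

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # b5213941
	    ls -l /etc/ssl/certs/b5213941.0
	    # -> /etc/ssl/certs/minikubeCA.pem, which in turn links to
	    #    /usr/share/ca-certificates/minikubeCA.pem, per the two ln -fs calls above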
	I0926 23:14:06.018792  240327 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 23:14:06.022233  240327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0926 23:14:06.029670  240327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0926 23:14:06.036140  240327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0926 23:14:06.042864  240327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0926 23:14:06.051087  240327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0926 23:14:06.060456  240327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
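	Each -checkend 86400 call above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit is what would trigger regeneration. By hand:

	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "still valid in 24h" || echo "expires within 24h"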
	I0926 23:14:06.068343  240327 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-655811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubernetes-upgrade-655811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:14:06.068444  240327 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0926 23:14:06.068479  240327 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0926 23:14:06.105432  240327 cri.go:89] found id: ""
	I0926 23:14:06.105503  240327 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 23:14:06.114609  240327 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0926 23:14:06.114626  240327 kubeadm.go:597] restartPrimaryControlPlane start ...
	I0926 23:14:06.114665  240327 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0926 23:14:06.124965  240327 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0926 23:14:06.125609  240327 kubeconfig.go:125] found "kubernetes-upgrade-655811" server: "https://192.168.85.2:8443"
	I0926 23:14:06.126885  240327 kapi.go:59] client config for kubernetes-upgrade-655811: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21642-9508/.minikube/profiles/kubernetes-upgrade-655811/client.crt", KeyFile:"/home/jenkins/minikube-integration/21642-9508/.minikube/profiles/kubernetes-upgrade-655811/client.key", CAFile:"/home/jenkins/minikube-integration/21642-9508/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0926 23:14:06.127451  240327 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0926 23:14:06.127473  240327 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0926 23:14:06.127481  240327 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0926 23:14:06.127486  240327 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0926 23:14:06.127493  240327 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0926 23:14:06.127878  240327 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0926 23:14:06.138345  240327 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I0926 23:14:06.138375  240327 kubeadm.go:601] duration metric: took 23.742862ms to restartPrimaryControlPlane
	I0926 23:14:06.138385  240327 kubeadm.go:402] duration metric: took 70.050122ms to StartCluster
	I0926 23:14:06.138400  240327 settings.go:142] acquiring lock: {Name:mke935858c08b57824075e52fb45232e2555a3b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:14:06.138452  240327 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-9508/kubeconfig
	I0926 23:14:06.139499  240327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-9508/kubeconfig: {Name:mka72bf89c026ab3e09a0062db4219353845dcad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:14:06.139697  240327 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0926 23:14:06.139775  240327 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 23:14:06.139872  240327 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-655811"
	I0926 23:14:06.139890  240327 addons.go:238] Setting addon storage-provisioner=true in "kubernetes-upgrade-655811"
	I0926 23:14:06.139899  240327 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-655811"
	I0926 23:14:06.139928  240327 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-655811"
	I0926 23:14:06.139941  240327 config.go:182] Loaded profile config "kubernetes-upgrade-655811": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	W0926 23:14:06.139902  240327 addons.go:247] addon storage-provisioner should already be in state true
	I0926 23:14:06.140018  240327 host.go:66] Checking if "kubernetes-upgrade-655811" exists ...
	I0926 23:14:06.140152  240327 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-655811 --format={{.State.Status}}
	I0926 23:14:06.140443  240327 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-655811 --format={{.State.Status}}
	I0926 23:14:06.142000  240327 out.go:179] * Verifying Kubernetes components...
	I0926 23:14:06.142988  240327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:14:06.160273  240327 kapi.go:59] client config for kubernetes-upgrade-655811: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21642-9508/.minikube/profiles/kubernetes-upgrade-655811/client.crt", KeyFile:"/home/jenkins/minikube-integration/21642-9508/.minikube/profiles/kubernetes-upgrade-655811/client.key", CAFile:"/home/jenkins/minikube-integration/21642-9508/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0926 23:14:06.160546  240327 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-655811"
	W0926 23:14:06.160563  240327 addons.go:247] addon default-storageclass should already be in state true
	I0926 23:14:06.160589  240327 host.go:66] Checking if "kubernetes-upgrade-655811" exists ...
	I0926 23:14:06.160939  240327 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-655811 --format={{.State.Status}}
	I0926 23:14:06.161401  240327 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 23:14:06.162569  240327 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 23:14:06.162584  240327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 23:14:06.162620  240327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-655811
	I0926 23:14:06.186271  240327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/kubernetes-upgrade-655811/id_rsa Username:docker}
	I0926 23:14:06.188373  240327 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 23:14:06.188490  240327 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 23:14:06.188566  240327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-655811
	I0926 23:14:06.210510  240327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/kubernetes-upgrade-655811/id_rsa Username:docker}
	I0926 23:14:06.250570  240327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:14:06.276591  240327 api_server.go:52] waiting for apiserver process to appear ...
	I0926 23:14:06.276687  240327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:14:06.291591  240327 api_server.go:72] duration metric: took 151.854175ms to wait for apiserver process to appear ...
	I0926 23:14:06.291611  240327 api_server.go:88] waiting for apiserver healthz status ...
	I0926 23:14:06.291626  240327 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0926 23:14:06.296977  240327 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0926 23:14:06.303660  240327 api_server.go:141] control plane version: v1.34.0
	I0926 23:14:06.303688  240327 api_server.go:131] duration metric: took 12.069217ms to wait for apiserver health ...
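	The healthz probe above goes through the client-go config logged earlier in this section; the same check can be made from the host with curl, using the profile's client certificate and CA (paths taken from that kapi client config; a sketch, assuming the host can reach the 192.168.85.x docker network directly):

	    curl --cacert /home/jenkins/minikube-integration/21642-9508/.minikube/ca.crt \
	         --cert /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/kubernetes-upgrade-655811/client.crt \
	         --key /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/kubernetes-upgrade-655811/client.key \
	         https://192.168.85.2:8443/healthz
	    # ok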
	I0926 23:14:06.303698  240327 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 23:14:06.310483  240327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 23:14:06.325225  240327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0926 23:15:06.305153  240327 system_pods.go:55] pod list returned error: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
	I0926 23:15:06.305194  240327 retry.go:31] will retry after 311.627068ms: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
	I0926 23:15:06.617033  240327 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0926 23:15:06.617101  240327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
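	This is the CRI query that later in the log reports no matching containers; it can be repeated inside the node to see whether the control-plane containers ever came back up after the restart (same commands, minus --quiet so the full table prints):

	    sudo crictl ps -a --name=kube-apiserver
	    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system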
	I0926 23:20:02.355377  240327 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5m56.044846931s)
	W0926 23:20:02.355437  240327 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	serviceaccount/storage-provisioner unchanged
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
	Name: "storage-provisioner", Namespace: ""
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=rolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=RoleBinding"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
	Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints k8s.io-minikube-hostpath)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods storage-provisioner)
	I0926 23:20:02.355449  240327 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5m56.030178743s)
	I0926 23:20:02.355471  240327 ssh_runner.go:235] Completed: sudo crictl ps -a --quiet --name=kube-apiserver: (4m55.738339986s)
	I0926 23:20:02.355497  240327 cri.go:89] found id: ""
	W0926 23:20:02.355489  240327 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
	Name: "standard", Namespace: ""
	from server for: "/etc/kubernetes/addons/storageclass.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io standard)
	I0926 23:20:02.355508  240327 logs.go:282] 0 containers: []
	W0926 23:20:02.355516  240327 logs.go:284] No container was found matching "kube-apiserver"
	I0926 23:20:02.355526  240327 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	W0926 23:20:02.355555  240327 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	serviceaccount/storage-provisioner unchanged
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
	Name: "storage-provisioner", Namespace: ""
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=rolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=RoleBinding"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
	Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints k8s.io-minikube-hostpath)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods storage-provisioner)
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	serviceaccount/storage-provisioner unchanged
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
	Name: "storage-provisioner", Namespace: ""
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=rolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=RoleBinding"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
	Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints k8s.io-minikube-hostpath)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods storage-provisioner)
	]
	W0926 23:20:02.355577  240327 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
	Name: "standard", Namespace: ""
	from server for: "/etc/kubernetes/addons/storageclass.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io standard)
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
	Name: "standard", Namespace: ""
	from server for: "/etc/kubernetes/addons/storageclass.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io standard)
	]
	I0926 23:20:02.355586  240327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0926 23:20:02.395498  240327 cri.go:89] found id: "690b7cda90238c771c650d91f7b7447529e7d8e5f2caa11cc75b84d404a35f73"
	I0926 23:20:02.395523  240327 cri.go:89] found id: ""
	I0926 23:20:02.395533  240327 logs.go:282] 1 containers: [690b7cda90238c771c650d91f7b7447529e7d8e5f2caa11cc75b84d404a35f73]
	I0926 23:20:02.395595  240327 ssh_runner.go:195] Run: which crictl
	I0926 23:20:02.399805  240327 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0926 23:20:02.399877  240327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0926 23:20:02.442061  240327 cri.go:89] found id: ""
	I0926 23:20:02.442088  240327 logs.go:282] 0 containers: []
	W0926 23:20:02.442099  240327 logs.go:284] No container was found matching "coredns"
	I0926 23:20:02.442107  240327 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0926 23:20:02.442165  240327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0926 23:20:02.444221  240327 out.go:179] * Enabled addons: 
	I0926 23:20:02.485827  240327 cri.go:89] found id: "ca190779349cb50151bab6187679c5d33d29a3fa71f0da322bcdb0409666f2c7"
	I0926 23:20:02.485853  240327 cri.go:89] found id: ""
	I0926 23:20:02.485862  240327 logs.go:282] 1 containers: [ca190779349cb50151bab6187679c5d33d29a3fa71f0da322bcdb0409666f2c7]
	I0926 23:20:02.485934  240327 ssh_runner.go:195] Run: which crictl
	I0926 23:20:02.490893  240327 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0926 23:20:02.490961  240327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0926 23:20:02.504521  240327 addons.go:514] duration metric: took 5m56.364752558s for enable addons: enabled=[]
	I0926 23:20:02.526589  240327 cri.go:89] found id: ""
	I0926 23:20:02.526609  240327 logs.go:282] 0 containers: []
	W0926 23:20:02.526617  240327 logs.go:284] No container was found matching "kube-proxy"
	I0926 23:20:02.526625  240327 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0926 23:20:02.526679  240327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0926 23:20:02.559373  240327 cri.go:89] found id: "9c87256f37aac81fbc782779d0910f5bb98345e4f6937b526cfbe9588224d4e4"
	I0926 23:20:02.559394  240327 cri.go:89] found id: ""
	I0926 23:20:02.559402  240327 logs.go:282] 1 containers: [9c87256f37aac81fbc782779d0910f5bb98345e4f6937b526cfbe9588224d4e4]
	I0926 23:20:02.559460  240327 ssh_runner.go:195] Run: which crictl
	I0926 23:20:02.563214  240327 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0926 23:20:02.563271  240327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0926 23:20:02.597137  240327 cri.go:89] found id: ""
	I0926 23:20:02.597160  240327 logs.go:282] 0 containers: []
	W0926 23:20:02.597170  240327 logs.go:284] No container was found matching "kindnet"
	I0926 23:20:02.597180  240327 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0926 23:20:02.597235  240327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0926 23:20:02.630785  240327 cri.go:89] found id: ""
	I0926 23:20:02.630807  240327 logs.go:282] 0 containers: []
	W0926 23:20:02.630815  240327 logs.go:284] No container was found matching "storage-provisioner"
	I0926 23:20:02.630824  240327 logs.go:123] Gathering logs for kube-scheduler [ca190779349cb50151bab6187679c5d33d29a3fa71f0da322bcdb0409666f2c7] ...
	I0926 23:20:02.630836  240327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca190779349cb50151bab6187679c5d33d29a3fa71f0da322bcdb0409666f2c7"
	I0926 23:20:02.668532  240327 logs.go:123] Gathering logs for kube-controller-manager [9c87256f37aac81fbc782779d0910f5bb98345e4f6937b526cfbe9588224d4e4] ...
	I0926 23:20:02.668556  240327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c87256f37aac81fbc782779d0910f5bb98345e4f6937b526cfbe9588224d4e4"
	I0926 23:20:02.704496  240327 logs.go:123] Gathering logs for containerd ...
	I0926 23:20:02.704524  240327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0926 23:20:02.772937  240327 logs.go:123] Gathering logs for container status ...
	I0926 23:20:02.772975  240327 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 23:20:02.817610  240327 logs.go:123] Gathering logs for kubelet ...
	I0926 23:20:02.817646  240327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 23:20:02.943488  240327 logs.go:123] Gathering logs for dmesg ...
	I0926 23:20:02.943527  240327 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 23:20:02.965462  240327 logs.go:123] Gathering logs for describe nodes ...
	I0926 23:20:02.965490  240327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 23:21:03.046957  240327 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.08140861s)
	W0926 23:21:03.047024  240327 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I0926 23:21:03.047042  240327 logs.go:123] Gathering logs for etcd [690b7cda90238c771c650d91f7b7447529e7d8e5f2caa11cc75b84d404a35f73] ...
	I0926 23:21:03.047061  240327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 690b7cda90238c771c650d91f7b7447529e7d8e5f2caa11cc75b84d404a35f73"
	W0926 23:22:05.619179  240327 system_pods.go:55] pod list returned error: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
	I0926 23:22:05.620639  240327 out.go:203] 
	W0926 23:22:05.621646  240327 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for system pods: apiserver never returned a pod list
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for system pods: apiserver never returned a pod list
	W0926 23:22:05.621658  240327 out.go:285] * 
	* 
	W0926 23:22:05.623471  240327 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 23:22:05.624545  240327 out.go:203] 

                                                
                                                
** /stderr **
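Every kubectl apply in the stderr above times out because the same log's crictl probes never find a kube-apiserver container ("0 containers: []" for kube-apiserver). A minimal way to confirm that by hand, reusing the profile name, binary path, and manifest shown in the log (a diagnostic sketch only, not something the test runs), would be:

	# does an apiserver container exist at all? (same probe the log runs via ssh_runner)
	out/minikube-linux-amd64 -p kubernetes-upgrade-655811 ssh -- sudo crictl ps -a --name=kube-apiserver
	# retry one of the failed applies with an explicit client-side timeout
	out/minikube-linux-amd64 -p kubernetes-upgrade-655811 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --request-timeout=30s -f /etc/kubernetes/addons/storageclass.yaml

If the first command still returns nothing, the apply retries are expected to keep timing out, which matches the GUEST_START failure reported above.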
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-655811 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 80
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-09-26 23:22:05.650464397 +0000 UTC m=+3189.905671486
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-655811
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-655811:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ef7e244e295b4d9223046ffb73fff2880d5f3bb4aaa95a23b24b318a2e36af3a",
	        "Created": "2025-09-26T23:13:15.404119592Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 231281,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-26T23:13:34.642982205Z",
	            "FinishedAt": "2025-09-26T23:13:33.340344183Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/ef7e244e295b4d9223046ffb73fff2880d5f3bb4aaa95a23b24b318a2e36af3a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ef7e244e295b4d9223046ffb73fff2880d5f3bb4aaa95a23b24b318a2e36af3a/hostname",
	        "HostsPath": "/var/lib/docker/containers/ef7e244e295b4d9223046ffb73fff2880d5f3bb4aaa95a23b24b318a2e36af3a/hosts",
	        "LogPath": "/var/lib/docker/containers/ef7e244e295b4d9223046ffb73fff2880d5f3bb4aaa95a23b24b318a2e36af3a/ef7e244e295b4d9223046ffb73fff2880d5f3bb4aaa95a23b24b318a2e36af3a-json.log",
	        "Name": "/kubernetes-upgrade-655811",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "kubernetes-upgrade-655811:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "kubernetes-upgrade-655811",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ef7e244e295b4d9223046ffb73fff2880d5f3bb4aaa95a23b24b318a2e36af3a",
	                "LowerDir": "/var/lib/docker/overlay2/8d284e6ca8f9590855ec15a3a50c7772120b7093bbadedb2e8557873e4b6a4b3-init/diff:/var/lib/docker/overlay2/9d3f38ae04ffa0ee7bbacc3f831d8e286eafea1eb3c677a38c62c87997e117c6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8d284e6ca8f9590855ec15a3a50c7772120b7093bbadedb2e8557873e4b6a4b3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8d284e6ca8f9590855ec15a3a50c7772120b7093bbadedb2e8557873e4b6a4b3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8d284e6ca8f9590855ec15a3a50c7772120b7093bbadedb2e8557873e4b6a4b3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-655811",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-655811/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-655811",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-655811",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-655811",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2c943b58941f835ae65690fdf0896aa80ac9650be5d166d69f51a7bb4ff9df1a",
	            "SandboxKey": "/var/run/docker/netns/2c943b58941f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33028"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33029"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33032"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33030"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33031"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-655811": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:ff:b6:d8:a0:e9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ea928e46fe864375bbe44aeee02227ec855c4b28d426dd391dd9cd38a5204d38",
	                    "EndpointID": "f1e2d90e3488f8e8fa2e44c2f338e99d787ac833b2d0102650e0254490e883dc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-655811",
	                        "ef7e244e295b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
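Of everything in the inspect output, the detail most relevant to the failure is the apiserver port mapping: 8443/tcp is published on 127.0.0.1:33031, so the port is reachable from the host even though the apiserver behind it never answered the test's requests. A Go-template query against the same container pulls that mapping out directly (illustrative only, not part of the test flow):

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' kubernetes-upgrade-655811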
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-655811 -n kubernetes-upgrade-655811
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-655811 -n kubernetes-upgrade-655811: exit status 2 (15.779433181s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
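The status check above only renders the Host field, which still reads Running; the non-zero exit indicates minikube considers some other component unhealthy, which is why the harness notes it "may be ok" for post-mortem purposes. For a per-component view one could request the full status object instead (sketch, same profile as above):

	out/minikube-linux-amd64 status -p kubernetes-upgrade-655811 --output json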
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-655811 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-655811 logs -n 25: (1m0.955039229s)
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                     ARGS                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p enable-default-cni-708263                                                  │ enable-default-cni-708263 │ jenkins │ v1.37.0 │ 26 Sep 25 23:21 UTC │ 26 Sep 25 23:21 UTC │
	│ ssh     │ -p flannel-708263 pgrep -a kubelet                                            │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:21 UTC │ 26 Sep 25 23:21 UTC │
	│ ssh     │ -p flannel-708263 sudo cat /etc/nsswitch.conf                                 │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │ 26 Sep 25 23:22 UTC │
	│ ssh     │ -p flannel-708263 sudo cat /etc/hosts                                         │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │ 26 Sep 25 23:22 UTC │
	│ ssh     │ -p flannel-708263 sudo cat /etc/resolv.conf                                   │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │ 26 Sep 25 23:22 UTC │
	│ ssh     │ -p flannel-708263 sudo crictl pods                                            │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │ 26 Sep 25 23:22 UTC │
	│ ssh     │ -p flannel-708263 sudo crictl ps --all                                        │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │ 26 Sep 25 23:22 UTC │
	│ ssh     │ -p flannel-708263 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \; │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │ 26 Sep 25 23:22 UTC │
	│ ssh     │ -p flannel-708263 sudo ip a s                                                 │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │ 26 Sep 25 23:22 UTC │
	│ ssh     │ -p flannel-708263 sudo ip r s                                                 │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │ 26 Sep 25 23:22 UTC │
	│ ssh     │ -p flannel-708263 sudo iptables-save                                          │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │ 26 Sep 25 23:22 UTC │
	│ ssh     │ -p flannel-708263 sudo iptables -t nat -L -n -v                               │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │ 26 Sep 25 23:22 UTC │
	│ ssh     │ -p flannel-708263 sudo cat /run/flannel/subnet.env                            │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │ 26 Sep 25 23:22 UTC │
	│ ssh     │ -p flannel-708263 sudo cat /etc/kube-flannel/cni-conf.json                    │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │                     │
	│ ssh     │ -p flannel-708263 sudo systemctl status kubelet --all --full --no-pager       │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │ 26 Sep 25 23:22 UTC │
	│ ssh     │ -p flannel-708263 sudo systemctl cat kubelet --no-pager                       │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │ 26 Sep 25 23:22 UTC │
	│ ssh     │ -p flannel-708263 sudo journalctl -xeu kubelet --all --full --no-pager        │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │ 26 Sep 25 23:22 UTC │
	│ ssh     │ -p flannel-708263 sudo cat /etc/kubernetes/kubelet.conf                       │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │ 26 Sep 25 23:22 UTC │
	│ ssh     │ -p flannel-708263 sudo cat /var/lib/kubelet/config.yaml                       │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │ 26 Sep 25 23:22 UTC │
	│ ssh     │ -p flannel-708263 sudo systemctl status docker --all --full --no-pager        │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │                     │
	│ ssh     │ -p flannel-708263 sudo systemctl cat docker --no-pager                        │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │ 26 Sep 25 23:22 UTC │
	│ ssh     │ -p flannel-708263 sudo cat /etc/docker/daemon.json                            │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │                     │
	│ ssh     │ -p flannel-708263 sudo docker system info                                     │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │                     │
	│ ssh     │ -p flannel-708263 sudo systemctl status cri-docker --all --full --no-pager    │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │                     │
	│ ssh     │ -p flannel-708263 sudo systemctl cat cri-docker --no-pager                    │ flannel-708263            │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 23:21:18
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 23:21:18.119428  342224 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:21:18.119547  342224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:21:18.119557  342224 out.go:374] Setting ErrFile to fd 2...
	I0926 23:21:18.119563  342224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:21:18.119829  342224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
	I0926 23:21:18.120341  342224 out.go:368] Setting JSON to false
	I0926 23:21:18.121546  342224 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3813,"bootTime":1758925065,"procs":308,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 23:21:18.121619  342224 start.go:140] virtualization: kvm guest
	I0926 23:21:18.123268  342224 out.go:179] * [bridge-708263] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 23:21:18.124252  342224 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 23:21:18.124255  342224 notify.go:220] Checking for updates...
	I0926 23:21:18.125963  342224 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 23:21:18.126896  342224 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-9508/kubeconfig
	I0926 23:21:18.127878  342224 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube
	I0926 23:21:18.128739  342224 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 23:21:18.129566  342224 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 23:21:18.130721  342224 config.go:182] Loaded profile config "enable-default-cni-708263": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0926 23:21:18.130838  342224 config.go:182] Loaded profile config "flannel-708263": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0926 23:21:18.130922  342224 config.go:182] Loaded profile config "kubernetes-upgrade-655811": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0926 23:21:18.131038  342224 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 23:21:18.154089  342224 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 23:21:18.154222  342224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 23:21:18.210408  342224 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-26 23:21:18.20029644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 23:21:18.210539  342224 docker.go:318] overlay module found
	I0926 23:21:18.212143  342224 out.go:179] * Using the docker driver based on user configuration
	I0926 23:21:18.213229  342224 start.go:304] selected driver: docker
	I0926 23:21:18.213245  342224 start.go:924] validating driver "docker" against <nil>
	I0926 23:21:18.213256  342224 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 23:21:18.213814  342224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 23:21:18.269444  342224 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-26 23:21:18.259105461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 23:21:18.269597  342224 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 23:21:18.269830  342224 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 23:21:18.271284  342224 out.go:179] * Using Docker driver with root privileges
	I0926 23:21:18.272294  342224 cni.go:84] Creating CNI manager for "bridge"
	I0926 23:21:18.272321  342224 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 23:21:18.272383  342224 start.go:348] cluster config:
	{Name:bridge-708263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-708263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:21:18.273554  342224 out.go:179] * Starting "bridge-708263" primary control-plane node in "bridge-708263" cluster
	I0926 23:21:18.274577  342224 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0926 23:21:18.275504  342224 out.go:179] * Pulling base image v0.0.48 ...
	I0926 23:21:18.276401  342224 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0926 23:21:18.276450  342224 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-9508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0926 23:21:18.276460  342224 cache.go:58] Caching tarball of preloaded images
	I0926 23:21:18.276533  342224 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0926 23:21:18.276556  342224 preload.go:172] Found /home/jenkins/minikube-integration/21642-9508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 23:21:18.276567  342224 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0926 23:21:18.276686  342224 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/config.json ...
	I0926 23:21:18.276711  342224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/config.json: {Name:mk8458f6ea66caacd50bfc7bbb6a3f159082741f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:21:18.297890  342224 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0926 23:21:18.297909  342224 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0926 23:21:18.297927  342224 cache.go:232] Successfully downloaded all kic artifacts
	I0926 23:21:18.297959  342224 start.go:360] acquireMachinesLock for bridge-708263: {Name:mk11cd3e130614fb1e28389e62e581a61ebd338c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 23:21:18.298051  342224 start.go:364] duration metric: took 74.384µs to acquireMachinesLock for "bridge-708263"
	I0926 23:21:18.298076  342224 start.go:93] Provisioning new machine with config: &{Name:bridge-708263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-708263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0926 23:21:18.298164  342224 start.go:125] createHost starting for "" (driver="docker")
	W0926 23:21:16.035418  327941 pod_ready.go:104] pod "etcd-enable-default-cni-708263" is not "Ready", error: <nil>
	W0926 23:21:18.036323  327941 pod_ready.go:104] pod "etcd-enable-default-cni-708263" is not "Ready", error: <nil>
	I0926 23:21:20.035973  327941 pod_ready.go:94] pod "etcd-enable-default-cni-708263" is "Ready"
	I0926 23:21:20.036012  327941 pod_ready.go:86] duration metric: took 6.006124036s for pod "etcd-enable-default-cni-708263" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:21:20.037874  327941 pod_ready.go:83] waiting for pod "kube-apiserver-enable-default-cni-708263" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:21:20.041839  327941 pod_ready.go:94] pod "kube-apiserver-enable-default-cni-708263" is "Ready"
	I0926 23:21:20.041859  327941 pod_ready.go:86] duration metric: took 3.963015ms for pod "kube-apiserver-enable-default-cni-708263" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:21:20.044107  327941 pod_ready.go:83] waiting for pod "kube-controller-manager-enable-default-cni-708263" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:21:20.047694  327941 pod_ready.go:94] pod "kube-controller-manager-enable-default-cni-708263" is "Ready"
	I0926 23:21:20.047714  327941 pod_ready.go:86] duration metric: took 3.585325ms for pod "kube-controller-manager-enable-default-cni-708263" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:21:20.049458  327941 pod_ready.go:83] waiting for pod "kube-proxy-zkp4j" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:21:20.234417  327941 pod_ready.go:94] pod "kube-proxy-zkp4j" is "Ready"
	I0926 23:21:20.234441  327941 pod_ready.go:86] duration metric: took 184.966226ms for pod "kube-proxy-zkp4j" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:21:20.434522  327941 pod_ready.go:83] waiting for pod "kube-scheduler-enable-default-cni-708263" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:21:20.834528  327941 pod_ready.go:94] pod "kube-scheduler-enable-default-cni-708263" is "Ready"
	I0926 23:21:20.834562  327941 pod_ready.go:86] duration metric: took 400.011923ms for pod "kube-scheduler-enable-default-cni-708263" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:21:20.834579  327941 pod_ready.go:40] duration metric: took 7.316405363s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:21:20.882847  327941 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 23:21:20.953820  327941 out.go:179] * Done! kubectl is now configured to use "enable-default-cni-708263" cluster and "default" namespace by default
	I0926 23:21:16.384127  336025 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/flannel-708263/proxy-client.crt ...
	I0926 23:21:16.384155  336025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/flannel-708263/proxy-client.crt: {Name:mk975305091c65eb622a962673da0b23a03a8328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:21:16.384358  336025 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/flannel-708263/proxy-client.key ...
	I0926 23:21:16.384375  336025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/flannel-708263/proxy-client.key: {Name:mk34039bca319bc75f1c12ffdbc8acfd3f8f4199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:21:16.384569  336025 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/13040.pem (1338 bytes)
	W0926 23:21:16.384603  336025 certs.go:480] ignoring /home/jenkins/minikube-integration/21642-9508/.minikube/certs/13040_empty.pem, impossibly tiny 0 bytes
	I0926 23:21:16.384615  336025 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 23:21:16.384640  336025 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca.pem (1078 bytes)
	I0926 23:21:16.384663  336025 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/cert.pem (1123 bytes)
	I0926 23:21:16.384686  336025 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/key.pem (1679 bytes)
	I0926 23:21:16.384732  336025 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/files/etc/ssl/certs/130402.pem (1708 bytes)
	I0926 23:21:16.385482  336025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 23:21:16.412198  336025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 23:21:16.436335  336025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 23:21:16.460695  336025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 23:21:16.484254  336025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/flannel-708263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0926 23:21:16.508105  336025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/flannel-708263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0926 23:21:16.531989  336025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/flannel-708263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 23:21:16.556032  336025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/flannel-708263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0926 23:21:16.578583  336025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/certs/13040.pem --> /usr/share/ca-certificates/13040.pem (1338 bytes)
	I0926 23:21:16.603544  336025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/files/etc/ssl/certs/130402.pem --> /usr/share/ca-certificates/130402.pem (1708 bytes)
	I0926 23:21:16.626494  336025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 23:21:16.651818  336025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 23:21:16.669704  336025 ssh_runner.go:195] Run: openssl version
	I0926 23:21:16.675167  336025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 23:21:16.685338  336025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:21:16.689199  336025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:21:16.689285  336025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:21:16.696366  336025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 23:21:16.705823  336025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13040.pem && ln -fs /usr/share/ca-certificates/13040.pem /etc/ssl/certs/13040.pem"
	I0926 23:21:16.714851  336025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13040.pem
	I0926 23:21:16.718144  336025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 26 22:35 /usr/share/ca-certificates/13040.pem
	I0926 23:21:16.718189  336025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13040.pem
	I0926 23:21:16.724934  336025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13040.pem /etc/ssl/certs/51391683.0"
	I0926 23:21:16.733818  336025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130402.pem && ln -fs /usr/share/ca-certificates/130402.pem /etc/ssl/certs/130402.pem"
	I0926 23:21:16.742783  336025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130402.pem
	I0926 23:21:16.746087  336025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 26 22:35 /usr/share/ca-certificates/130402.pem
	I0926 23:21:16.746144  336025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130402.pem
	I0926 23:21:16.752585  336025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130402.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 23:21:16.761534  336025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 23:21:16.764650  336025 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0926 23:21:16.764699  336025 kubeadm.go:400] StartCluster: {Name:flannel-708263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:flannel-708263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:21:16.764781  336025 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0926 23:21:16.764813  336025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0926 23:21:16.797945  336025 cri.go:89] found id: ""
	I0926 23:21:16.798004  336025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 23:21:16.807061  336025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 23:21:16.815853  336025 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0926 23:21:16.815894  336025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 23:21:16.824171  336025 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 23:21:16.824185  336025 kubeadm.go:157] found existing configuration files:
	
	I0926 23:21:16.824217  336025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 23:21:16.832525  336025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 23:21:16.832559  336025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 23:21:16.841030  336025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 23:21:16.849114  336025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 23:21:16.849152  336025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 23:21:16.857032  336025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 23:21:16.865454  336025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 23:21:16.865501  336025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 23:21:16.873693  336025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 23:21:16.882613  336025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 23:21:16.882659  336025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0926 23:21:16.891326  336025 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0926 23:21:16.943627  336025 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0926 23:21:16.998236  336025 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0926 23:21:18.299700  342224 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0926 23:21:18.299951  342224 start.go:159] libmachine.API.Create for "bridge-708263" (driver="docker")
	I0926 23:21:18.299982  342224 client.go:168] LocalClient.Create starting
	I0926 23:21:18.300045  342224 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca.pem
	I0926 23:21:18.300082  342224 main.go:141] libmachine: Decoding PEM data...
	I0926 23:21:18.300103  342224 main.go:141] libmachine: Parsing certificate...
	I0926 23:21:18.300183  342224 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21642-9508/.minikube/certs/cert.pem
	I0926 23:21:18.300212  342224 main.go:141] libmachine: Decoding PEM data...
	I0926 23:21:18.300237  342224 main.go:141] libmachine: Parsing certificate...
	I0926 23:21:18.300568  342224 cli_runner.go:164] Run: docker network inspect bridge-708263 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0926 23:21:18.316586  342224 cli_runner.go:211] docker network inspect bridge-708263 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0926 23:21:18.316641  342224 network_create.go:284] running [docker network inspect bridge-708263] to gather additional debugging logs...
	I0926 23:21:18.316660  342224 cli_runner.go:164] Run: docker network inspect bridge-708263
	W0926 23:21:18.333413  342224 cli_runner.go:211] docker network inspect bridge-708263 returned with exit code 1
	I0926 23:21:18.333436  342224 network_create.go:287] error running [docker network inspect bridge-708263]: docker network inspect bridge-708263: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network bridge-708263 not found
	I0926 23:21:18.333449  342224 network_create.go:289] output of [docker network inspect bridge-708263]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network bridge-708263 not found
	
	** /stderr **
	I0926 23:21:18.333546  342224 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0926 23:21:18.349987  342224 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2261b2191090 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:5d:12:aa:39:a5} reservation:<nil>}
	I0926 23:21:18.350531  342224 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-3330c100578f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:6d:20:9f:ea:0e} reservation:<nil>}
	I0926 23:21:18.351196  342224 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-083883ed3484 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:98:2b:4b:42:07} reservation:<nil>}
	I0926 23:21:18.351930  342224 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d38270}
	I0926 23:21:18.351961  342224 network_create.go:124] attempt to create docker network bridge-708263 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0926 23:21:18.352006  342224 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-708263 bridge-708263
	I0926 23:21:18.414414  342224 network_create.go:108] docker network bridge-708263 192.168.76.0/24 created
	I0926 23:21:18.414441  342224 kic.go:121] calculated static IP "192.168.76.2" for the "bridge-708263" container
	I0926 23:21:18.414489  342224 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0926 23:21:18.431940  342224 cli_runner.go:164] Run: docker volume create bridge-708263 --label name.minikube.sigs.k8s.io=bridge-708263 --label created_by.minikube.sigs.k8s.io=true
	I0926 23:21:18.449409  342224 oci.go:103] Successfully created a docker volume bridge-708263
	I0926 23:21:18.449479  342224 cli_runner.go:164] Run: docker run --rm --name bridge-708263-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-708263 --entrypoint /usr/bin/test -v bridge-708263:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0926 23:21:18.815453  342224 oci.go:107] Successfully prepared a docker volume bridge-708263
	I0926 23:21:18.815500  342224 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0926 23:21:18.815520  342224 kic.go:194] Starting extracting preloaded images to volume ...
	I0926 23:21:18.815583  342224 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21642-9508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-708263:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0926 23:21:23.097664  342224 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21642-9508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-708263:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.282020139s)
	I0926 23:21:23.097713  342224 kic.go:203] duration metric: took 4.282187255s to extract preloaded images to volume ...
	W0926 23:21:23.097869  342224 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0926 23:21:23.097920  342224 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0926 23:21:23.097966  342224 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0926 23:21:23.169140  342224 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-708263 --name bridge-708263 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-708263 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-708263 --network bridge-708263 --ip 192.168.76.2 --volume bridge-708263:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0926 23:21:23.484406  342224 cli_runner.go:164] Run: docker container inspect bridge-708263 --format={{.State.Running}}
	I0926 23:21:23.505924  342224 cli_runner.go:164] Run: docker container inspect bridge-708263 --format={{.State.Status}}
	I0926 23:21:23.527454  342224 cli_runner.go:164] Run: docker exec bridge-708263 stat /var/lib/dpkg/alternatives/iptables
	I0926 23:21:23.582364  342224 oci.go:144] the created container "bridge-708263" has a running status.
	I0926 23:21:23.582398  342224 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21642-9508/.minikube/machines/bridge-708263/id_rsa...
	I0926 23:21:24.133566  342224 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21642-9508/.minikube/machines/bridge-708263/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0926 23:21:24.167883  342224 cli_runner.go:164] Run: docker container inspect bridge-708263 --format={{.State.Status}}
	I0926 23:21:24.192225  342224 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0926 23:21:24.192252  342224 kic_runner.go:114] Args: [docker exec --privileged bridge-708263 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0926 23:21:24.252684  342224 cli_runner.go:164] Run: docker container inspect bridge-708263 --format={{.State.Status}}
	I0926 23:21:24.277137  342224 machine.go:93] provisionDockerMachine start ...
	I0926 23:21:24.277284  342224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-708263
	I0926 23:21:24.301292  342224 main.go:141] libmachine: Using SSH client type: native
	I0926 23:21:24.301619  342224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0926 23:21:24.301641  342224 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 23:21:24.453609  342224 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-708263
	
	I0926 23:21:24.453642  342224 ubuntu.go:182] provisioning hostname "bridge-708263"
	I0926 23:21:24.453718  342224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-708263
	I0926 23:21:24.477820  342224 main.go:141] libmachine: Using SSH client type: native
	I0926 23:21:24.478182  342224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0926 23:21:24.478204  342224 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-708263 && echo "bridge-708263" | sudo tee /etc/hostname
	I0926 23:21:24.650349  342224 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-708263
	
	I0926 23:21:24.650419  342224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-708263
	I0926 23:21:24.679360  342224 main.go:141] libmachine: Using SSH client type: native
	I0926 23:21:24.679677  342224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0926 23:21:24.679701  342224 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-708263' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-708263/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-708263' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 23:21:24.834229  342224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 23:21:24.834260  342224 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21642-9508/.minikube CaCertPath:/home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21642-9508/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21642-9508/.minikube}
	I0926 23:21:24.834289  342224 ubuntu.go:190] setting up certificates
	I0926 23:21:24.834306  342224 provision.go:84] configureAuth start
	I0926 23:21:24.834373  342224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-708263
	I0926 23:21:24.856765  342224 provision.go:143] copyHostCerts
	I0926 23:21:24.856821  342224 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-9508/.minikube/ca.pem, removing ...
	I0926 23:21:24.856834  342224 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-9508/.minikube/ca.pem
	I0926 23:21:24.856905  342224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21642-9508/.minikube/ca.pem (1078 bytes)
	I0926 23:21:24.857040  342224 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-9508/.minikube/cert.pem, removing ...
	I0926 23:21:24.857057  342224 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-9508/.minikube/cert.pem
	I0926 23:21:24.857108  342224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21642-9508/.minikube/cert.pem (1123 bytes)
	I0926 23:21:24.857187  342224 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-9508/.minikube/key.pem, removing ...
	I0926 23:21:24.857195  342224 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-9508/.minikube/key.pem
	I0926 23:21:24.857224  342224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21642-9508/.minikube/key.pem (1679 bytes)
	I0926 23:21:24.857324  342224 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21642-9508/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca-key.pem org=jenkins.bridge-708263 san=[127.0.0.1 192.168.76.2 bridge-708263 localhost minikube]
	I0926 23:21:25.380255  342224 provision.go:177] copyRemoteCerts
	I0926 23:21:25.380311  342224 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 23:21:25.380343  342224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-708263
	I0926 23:21:25.401592  342224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/bridge-708263/id_rsa Username:docker}
	I0926 23:21:25.504227  342224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0926 23:21:25.531110  342224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 23:21:25.555732  342224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 23:21:25.580992  342224 provision.go:87] duration metric: took 746.671933ms to configureAuth
	I0926 23:21:25.581025  342224 ubuntu.go:206] setting minikube options for container-runtime
	I0926 23:21:25.581208  342224 config.go:182] Loaded profile config "bridge-708263": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0926 23:21:25.581222  342224 machine.go:96] duration metric: took 1.30406524s to provisionDockerMachine
	I0926 23:21:25.581230  342224 client.go:171] duration metric: took 7.281241277s to LocalClient.Create
	I0926 23:21:25.581255  342224 start.go:167] duration metric: took 7.281301038s to libmachine.API.Create "bridge-708263"
	I0926 23:21:25.581265  342224 start.go:293] postStartSetup for "bridge-708263" (driver="docker")
	I0926 23:21:25.581285  342224 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 23:21:25.581335  342224 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 23:21:25.581379  342224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-708263
	I0926 23:21:25.598859  342224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/bridge-708263/id_rsa Username:docker}
	I0926 23:21:25.696727  342224 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 23:21:25.700073  342224 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0926 23:21:25.700101  342224 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0926 23:21:25.700109  342224 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0926 23:21:25.700115  342224 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0926 23:21:25.700124  342224 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-9508/.minikube/addons for local assets ...
	I0926 23:21:25.700169  342224 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-9508/.minikube/files for local assets ...
	I0926 23:21:25.700253  342224 filesync.go:149] local asset: /home/jenkins/minikube-integration/21642-9508/.minikube/files/etc/ssl/certs/130402.pem -> 130402.pem in /etc/ssl/certs
	I0926 23:21:25.700350  342224 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 23:21:25.709551  342224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/files/etc/ssl/certs/130402.pem --> /etc/ssl/certs/130402.pem (1708 bytes)
	I0926 23:21:25.736059  342224 start.go:296] duration metric: took 154.781716ms for postStartSetup
	I0926 23:21:25.736375  342224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-708263
	I0926 23:21:25.753602  342224 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/config.json ...
	I0926 23:21:25.753863  342224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 23:21:25.753899  342224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-708263
	I0926 23:21:25.771808  342224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/bridge-708263/id_rsa Username:docker}
	I0926 23:21:25.863643  342224 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0926 23:21:25.868099  342224 start.go:128] duration metric: took 7.569920723s to createHost
	I0926 23:21:25.868122  342224 start.go:83] releasing machines lock for "bridge-708263", held for 7.570059746s
	I0926 23:21:25.868184  342224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-708263
	I0926 23:21:25.886404  342224 ssh_runner.go:195] Run: cat /version.json
	I0926 23:21:25.886450  342224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-708263
	I0926 23:21:25.886462  342224 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 23:21:25.886514  342224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-708263
	I0926 23:21:25.905652  342224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/bridge-708263/id_rsa Username:docker}
	I0926 23:21:25.905915  342224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/bridge-708263/id_rsa Username:docker}
	I0926 23:21:26.083119  342224 ssh_runner.go:195] Run: systemctl --version
	I0926 23:21:26.088092  342224 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 23:21:26.092544  342224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0926 23:21:26.121996  342224 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0926 23:21:26.122070  342224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 23:21:26.149025  342224 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0926 23:21:26.149046  342224 start.go:495] detecting cgroup driver to use...
	I0926 23:21:26.149080  342224 detect.go:190] detected "systemd" cgroup driver on host os
	I0926 23:21:26.149122  342224 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 23:21:26.161485  342224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 23:21:26.172781  342224 docker.go:218] disabling cri-docker service (if available) ...
	I0926 23:21:26.172835  342224 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0926 23:21:26.189829  342224 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0926 23:21:26.205385  342224 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0926 23:21:26.285721  342224 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0926 23:21:26.372829  342224 docker.go:234] disabling docker service ...
	I0926 23:21:26.372896  342224 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0926 23:21:26.393894  342224 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0926 23:21:26.410166  342224 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0926 23:21:26.494643  342224 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0926 23:21:26.574001  342224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0926 23:21:26.588075  342224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 23:21:26.609100  342224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0926 23:21:26.622442  342224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 23:21:26.634388  342224 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0926 23:21:26.634443  342224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0926 23:21:26.646813  342224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 23:21:26.659315  342224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 23:21:26.671154  342224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 23:21:26.684201  342224 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 23:21:26.695478  342224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 23:21:26.707802  342224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 23:21:26.719334  342224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 23:21:26.731518  342224 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 23:21:26.740972  342224 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 23:21:26.749568  342224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:21:26.818223  342224 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 23:21:26.914607  342224 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0926 23:21:26.914683  342224 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0926 23:21:26.918579  342224 start.go:563] Will wait 60s for crictl version
	I0926 23:21:26.918640  342224 ssh_runner.go:195] Run: which crictl
	I0926 23:21:26.922118  342224 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 23:21:26.957056  342224 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0926 23:21:26.957126  342224 ssh_runner.go:195] Run: containerd --version
	I0926 23:21:26.985200  342224 ssh_runner.go:195] Run: containerd --version
	I0926 23:21:27.014325  342224 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0926 23:21:27.015399  342224 cli_runner.go:164] Run: docker network inspect bridge-708263 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0926 23:21:27.034783  342224 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0926 23:21:27.038890  342224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 23:21:27.050566  342224 kubeadm.go:883] updating cluster {Name:bridge-708263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-708263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 23:21:27.050662  342224 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0926 23:21:27.050705  342224 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:21:27.083090  342224 containerd.go:627] all images are preloaded for containerd runtime.
	I0926 23:21:27.083113  342224 containerd.go:534] Images already preloaded, skipping extraction
	I0926 23:21:27.083170  342224 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:21:27.116377  342224 containerd.go:627] all images are preloaded for containerd runtime.
	I0926 23:21:27.116401  342224 cache_images.go:85] Images are preloaded, skipping loading
	I0926 23:21:27.116411  342224 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.0 containerd true true} ...
	I0926 23:21:27.116513  342224 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-708263 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:bridge-708263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0926 23:21:27.116572  342224 ssh_runner.go:195] Run: sudo crictl info
	I0926 23:21:27.150575  342224 cni.go:84] Creating CNI manager for "bridge"
	I0926 23:21:27.150606  342224 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 23:21:27.150632  342224 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-708263 NodeName:bridge-708263 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 23:21:27.150825  342224 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "bridge-708263"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 23:21:27.150896  342224 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0926 23:21:27.160361  342224 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 23:21:27.160422  342224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 23:21:27.169330  342224 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0926 23:21:27.187530  342224 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 23:21:27.209501  342224 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I0926 23:21:27.231894  342224 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0926 23:21:27.236150  342224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 23:21:27.249420  342224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:21:27.315273  342224 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:21:27.340235  342224 certs.go:69] Setting up /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263 for IP: 192.168.76.2
	I0926 23:21:27.340255  342224 certs.go:195] generating shared ca certs ...
	I0926 23:21:27.340275  342224 certs.go:227] acquiring lock for ca certs: {Name:mk080975279b3a5ea38bd0bf3f7fdebf08ad146a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:21:27.340409  342224 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21642-9508/.minikube/ca.key
	I0926 23:21:27.340463  342224 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21642-9508/.minikube/proxy-client-ca.key
	I0926 23:21:27.340478  342224 certs.go:257] generating profile certs ...
	I0926 23:21:27.340539  342224 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/client.key
	I0926 23:21:27.340555  342224 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/client.crt with IP's: []
	I0926 23:21:27.551857  342224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/client.crt ...
	I0926 23:21:27.551893  342224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/client.crt: {Name:mk744b515f6632437fb30150ceed3e09d671b7eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:21:27.552099  342224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/client.key ...
	I0926 23:21:27.552112  342224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/client.key: {Name:mk77ed77aa99702cf02c9d4d06da2517d6549b5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:21:27.552205  342224 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/apiserver.key.bec936ed
	I0926 23:21:27.552221  342224 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/apiserver.crt.bec936ed with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0926 23:21:27.886262  342224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/apiserver.crt.bec936ed ...
	I0926 23:21:27.886290  342224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/apiserver.crt.bec936ed: {Name:mk6cf3425f7527cd79b988f5dd49d079d3667174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:21:27.886472  342224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/apiserver.key.bec936ed ...
	I0926 23:21:27.886490  342224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/apiserver.key.bec936ed: {Name:mk56f10cb339402e9a807cf8140d54f1586b5388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:21:27.886603  342224 certs.go:382] copying /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/apiserver.crt.bec936ed -> /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/apiserver.crt
	I0926 23:21:27.886718  342224 certs.go:386] copying /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/apiserver.key.bec936ed -> /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/apiserver.key
	I0926 23:21:27.886827  342224 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/proxy-client.key
	I0926 23:21:27.886849  342224 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/proxy-client.crt with IP's: []
	I0926 23:21:27.920468  342224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/proxy-client.crt ...
	I0926 23:21:27.920493  342224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/proxy-client.crt: {Name:mk2aa1447adac4c23f8d631bd7c733abf0c840ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:21:27.920634  342224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/proxy-client.key ...
	I0926 23:21:27.920645  342224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/proxy-client.key: {Name:mkc172d1b993ed8e3c639c40820d509fc91dcf1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:21:27.920835  342224 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/13040.pem (1338 bytes)
	W0926 23:21:27.920867  342224 certs.go:480] ignoring /home/jenkins/minikube-integration/21642-9508/.minikube/certs/13040_empty.pem, impossibly tiny 0 bytes
	I0926 23:21:27.920877  342224 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 23:21:27.920905  342224 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/ca.pem (1078 bytes)
	I0926 23:21:27.920928  342224 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/cert.pem (1123 bytes)
	I0926 23:21:27.920948  342224 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/certs/key.pem (1679 bytes)
	I0926 23:21:27.920993  342224 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-9508/.minikube/files/etc/ssl/certs/130402.pem (1708 bytes)
	I0926 23:21:27.921626  342224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 23:21:27.948011  342224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 23:21:27.976130  342224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 23:21:28.000637  342224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 23:21:28.025586  342224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0926 23:21:28.051071  342224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 23:21:28.077379  342224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 23:21:28.102507  342224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/bridge-708263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 23:21:28.355934  336025 kubeadm.go:318] [init] Using Kubernetes version: v1.34.0
	I0926 23:21:28.355985  336025 kubeadm.go:318] [preflight] Running pre-flight checks
	I0926 23:21:28.356053  336025 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I0926 23:21:28.356151  336025 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1040-gcp
	I0926 23:21:28.356228  336025 kubeadm.go:318] OS: Linux
	I0926 23:21:28.356290  336025 kubeadm.go:318] CGROUPS_CPU: enabled
	I0926 23:21:28.356361  336025 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I0926 23:21:28.356431  336025 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I0926 23:21:28.356518  336025 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I0926 23:21:28.356569  336025 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I0926 23:21:28.356619  336025 kubeadm.go:318] CGROUPS_PIDS: enabled
	I0926 23:21:28.356668  336025 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I0926 23:21:28.356726  336025 kubeadm.go:318] CGROUPS_IO: enabled
	I0926 23:21:28.356839  336025 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 23:21:28.356927  336025 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 23:21:28.357025  336025 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 23:21:28.357117  336025 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 23:21:28.358907  336025 out.go:252]   - Generating certificates and keys ...
	I0926 23:21:28.359026  336025 kubeadm.go:318] [certs] Using existing ca certificate authority
	I0926 23:21:28.359142  336025 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I0926 23:21:28.359251  336025 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0926 23:21:28.359328  336025 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I0926 23:21:28.359406  336025 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I0926 23:21:28.359471  336025 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I0926 23:21:28.359540  336025 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I0926 23:21:28.359686  336025 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [flannel-708263 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0926 23:21:28.359784  336025 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I0926 23:21:28.359947  336025 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [flannel-708263 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0926 23:21:28.360049  336025 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0926 23:21:28.360144  336025 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I0926 23:21:28.360209  336025 kubeadm.go:318] [certs] Generating "sa" key and public key
	I0926 23:21:28.360305  336025 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 23:21:28.360391  336025 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 23:21:28.360463  336025 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 23:21:28.360528  336025 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 23:21:28.360632  336025 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 23:21:28.360707  336025 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 23:21:28.360859  336025 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 23:21:28.360956  336025 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 23:21:28.362032  336025 out.go:252]   - Booting up control plane ...
	I0926 23:21:28.362161  336025 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 23:21:28.362285  336025 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 23:21:28.362392  336025 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 23:21:28.362556  336025 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 23:21:28.362692  336025 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0926 23:21:28.362849  336025 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0926 23:21:28.362991  336025 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 23:21:28.363062  336025 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I0926 23:21:28.363249  336025 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0926 23:21:28.363390  336025 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0926 23:21:28.363458  336025 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501635029s
	I0926 23:21:28.363562  336025 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0926 23:21:28.363675  336025 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I0926 23:21:28.363801  336025 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0926 23:21:28.363910  336025 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0926 23:21:28.364026  336025 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.100422831s
	I0926 23:21:28.364155  336025 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.299587114s
	I0926 23:21:28.364259  336025 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001887976s
	I0926 23:21:28.364407  336025 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 23:21:28.364518  336025 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 23:21:28.364569  336025 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 23:21:28.364778  336025 kubeadm.go:318] [mark-control-plane] Marking the node flannel-708263 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 23:21:28.364838  336025 kubeadm.go:318] [bootstrap-token] Using token: xt3b73.pw1ggu77rgb5cz3x
	I0926 23:21:28.366542  336025 out.go:252]   - Configuring RBAC rules ...
	I0926 23:21:28.366657  336025 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 23:21:28.366791  336025 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 23:21:28.366994  336025 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 23:21:28.367180  336025 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 23:21:28.367313  336025 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 23:21:28.367426  336025 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 23:21:28.367576  336025 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 23:21:28.367654  336025 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I0926 23:21:28.367719  336025 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I0926 23:21:28.367728  336025 kubeadm.go:318] 
	I0926 23:21:28.367831  336025 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I0926 23:21:28.367838  336025 kubeadm.go:318] 
	I0926 23:21:28.367900  336025 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I0926 23:21:28.367906  336025 kubeadm.go:318] 
	I0926 23:21:28.367947  336025 kubeadm.go:318]   mkdir -p $HOME/.kube
	I0926 23:21:28.368042  336025 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 23:21:28.368104  336025 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 23:21:28.368113  336025 kubeadm.go:318] 
	I0926 23:21:28.368158  336025 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I0926 23:21:28.368166  336025 kubeadm.go:318] 
	I0926 23:21:28.368217  336025 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 23:21:28.368225  336025 kubeadm.go:318] 
	I0926 23:21:28.368281  336025 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I0926 23:21:28.368363  336025 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 23:21:28.368430  336025 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 23:21:28.368440  336025 kubeadm.go:318] 
	I0926 23:21:28.368511  336025 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 23:21:28.368633  336025 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I0926 23:21:28.368646  336025 kubeadm.go:318] 
	I0926 23:21:28.368770  336025 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token xt3b73.pw1ggu77rgb5cz3x \
	I0926 23:21:28.368900  336025 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1dbeb716d602e0941682b86f7d46c5a496a37728672c82fc41605cb6bf1292a7 \
	I0926 23:21:28.368922  336025 kubeadm.go:318] 	--control-plane 
	I0926 23:21:28.368925  336025 kubeadm.go:318] 
	I0926 23:21:28.369028  336025 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I0926 23:21:28.369039  336025 kubeadm.go:318] 
	I0926 23:21:28.369151  336025 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token xt3b73.pw1ggu77rgb5cz3x \
	I0926 23:21:28.369281  336025 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1dbeb716d602e0941682b86f7d46c5a496a37728672c82fc41605cb6bf1292a7 
	I0926 23:21:28.369297  336025 cni.go:84] Creating CNI manager for "flannel"
	I0926 23:21:28.373856  336025 out.go:179] * Configuring Flannel (Container Networking Interface) ...
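The join commands in the kubeadm output above carry a --discovery-token-ca-cert-hash. kubeadm derives that value as SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA certificate. A minimal Go sketch of the computation, assuming the CA lives under the /var/lib/minikube/certs directory used by this run:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Path assumed from the certificateDir shown in the kubeadm output above.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm's discovery hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}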
	I0926 23:21:28.129840  342224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 23:21:28.159394  342224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/certs/13040.pem --> /usr/share/ca-certificates/13040.pem (1338 bytes)
	I0926 23:21:28.183929  342224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-9508/.minikube/files/etc/ssl/certs/130402.pem --> /usr/share/ca-certificates/130402.pem (1708 bytes)
	I0926 23:21:28.209612  342224 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 23:21:28.228446  342224 ssh_runner.go:195] Run: openssl version
	I0926 23:21:28.234293  342224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 23:21:28.244036  342224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:21:28.247629  342224 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:21:28.247675  342224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:21:28.254517  342224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 23:21:28.264376  342224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13040.pem && ln -fs /usr/share/ca-certificates/13040.pem /etc/ssl/certs/13040.pem"
	I0926 23:21:28.274059  342224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13040.pem
	I0926 23:21:28.277590  342224 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 26 22:35 /usr/share/ca-certificates/13040.pem
	I0926 23:21:28.277641  342224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13040.pem
	I0926 23:21:28.284512  342224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13040.pem /etc/ssl/certs/51391683.0"
	I0926 23:21:28.294592  342224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130402.pem && ln -fs /usr/share/ca-certificates/130402.pem /etc/ssl/certs/130402.pem"
	I0926 23:21:28.305048  342224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130402.pem
	I0926 23:21:28.309265  342224 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 26 22:35 /usr/share/ca-certificates/130402.pem
	I0926 23:21:28.309304  342224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130402.pem
	I0926 23:21:28.315882  342224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130402.pem /etc/ssl/certs/3ec20f2e.0"
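The certs.go block above copies each CA PEM into /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0). A rough Go equivalent of one iteration, shelling out to openssl for the hash just as the logged commands do; the path is taken from the log and this is only a sketch:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log above

		// `openssl x509 -hash -noout -in <pem>` prints the subject hash used for the symlink name.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))

		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// Recreate the symlink idempotently, mirroring the `test -L || ln -fs` shell in the log.
		_ = os.Remove(link)
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", pemPath, "->", link)
	}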
	I0926 23:21:28.325471  342224 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 23:21:28.328895  342224 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0926 23:21:28.328957  342224 kubeadm.go:400] StartCluster: {Name:bridge-708263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-708263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:21:28.329019  342224 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0926 23:21:28.329055  342224 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0926 23:21:28.366514  342224 cri.go:89] found id: ""
	I0926 23:21:28.366579  342224 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 23:21:28.376347  342224 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 23:21:28.386698  342224 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0926 23:21:28.386778  342224 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 23:21:28.395922  342224 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 23:21:28.395952  342224 kubeadm.go:157] found existing configuration files:
	
	I0926 23:21:28.396001  342224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 23:21:28.404921  342224 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 23:21:28.404971  342224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 23:21:28.414268  342224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 23:21:28.423470  342224 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 23:21:28.423514  342224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 23:21:28.432963  342224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 23:21:28.442820  342224 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 23:21:28.442872  342224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 23:21:28.451989  342224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 23:21:28.462798  342224 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 23:21:28.462853  342224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
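The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so kubeadm can regenerate it (here all four files are simply absent). A hedged Go sketch of that check, with the endpoint string taken from the log:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		configs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, path := range configs {
			data, err := os.ReadFile(path)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or stale endpoint: drop it so kubeadm writes a fresh one.
				_ = os.Remove(path)
				fmt.Println("removed stale config:", path)
				continue
			}
			fmt.Println("keeping:", path)
		}
	}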
	I0926 23:21:28.473299  342224 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0926 23:21:28.520283  342224 kubeadm.go:318] [init] Using Kubernetes version: v1.34.0
	I0926 23:21:28.520351  342224 kubeadm.go:318] [preflight] Running pre-flight checks
	I0926 23:21:28.537120  342224 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I0926 23:21:28.537238  342224 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1040-gcp
	I0926 23:21:28.537312  342224 kubeadm.go:318] OS: Linux
	I0926 23:21:28.537394  342224 kubeadm.go:318] CGROUPS_CPU: enabled
	I0926 23:21:28.537467  342224 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I0926 23:21:28.537538  342224 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I0926 23:21:28.537618  342224 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I0926 23:21:28.537694  342224 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I0926 23:21:28.537785  342224 kubeadm.go:318] CGROUPS_PIDS: enabled
	I0926 23:21:28.537862  342224 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I0926 23:21:28.537922  342224 kubeadm.go:318] CGROUPS_IO: enabled
	I0926 23:21:28.606137  342224 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 23:21:28.606664  342224 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 23:21:28.606829  342224 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 23:21:28.613494  342224 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 23:21:28.374881  336025 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0926 23:21:28.379040  336025 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0926 23:21:28.379058  336025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4415 bytes)
	I0926 23:21:28.399020  336025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0926 23:21:28.742971  336025 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 23:21:28.743039  336025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:28.743127  336025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-708263 minikube.k8s.io/updated_at=2025_09_26T23_21_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47 minikube.k8s.io/name=flannel-708263 minikube.k8s.io/primary=true
	I0926 23:21:28.816870  336025 ops.go:34] apiserver oom_adj: -16
	I0926 23:21:28.816981  336025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:29.317120  336025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:29.817965  336025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:30.317303  336025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:30.817776  336025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:28.614999  342224 out.go:252]   - Generating certificates and keys ...
	I0926 23:21:28.615106  342224 kubeadm.go:318] [certs] Using existing ca certificate authority
	I0926 23:21:28.615951  342224 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I0926 23:21:28.793136  342224 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0926 23:21:28.848951  342224 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I0926 23:21:28.993606  342224 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I0926 23:21:29.241215  342224 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I0926 23:21:29.325079  342224 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I0926 23:21:29.325240  342224 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [bridge-708263 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0926 23:21:29.495345  342224 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I0926 23:21:29.495512  342224 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [bridge-708263 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0926 23:21:30.140206  342224 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0926 23:21:30.259554  342224 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I0926 23:21:30.493272  342224 kubeadm.go:318] [certs] Generating "sa" key and public key
	I0926 23:21:30.493364  342224 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 23:21:30.599121  342224 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 23:21:31.011209  342224 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 23:21:31.236096  342224 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 23:21:31.601358  342224 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 23:21:31.800298  342224 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 23:21:31.800915  342224 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 23:21:31.804699  342224 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 23:21:31.806159  342224 out.go:252]   - Booting up control plane ...
	I0926 23:21:31.806265  342224 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 23:21:31.806369  342224 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 23:21:31.807407  342224 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 23:21:31.834526  342224 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 23:21:31.834674  342224 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0926 23:21:31.844086  342224 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0926 23:21:31.844434  342224 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 23:21:31.844496  342224 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I0926 23:21:31.942510  342224 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0926 23:21:31.942689  342224 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0926 23:21:31.317155  336025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:31.817043  336025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:32.317286  336025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:32.817953  336025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:33.317476  336025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:33.388411  336025 kubeadm.go:1113] duration metric: took 4.645427379s to wait for elevateKubeSystemPrivileges
	I0926 23:21:33.388448  336025 kubeadm.go:402] duration metric: took 16.623752058s to StartCluster
	I0926 23:21:33.388467  336025 settings.go:142] acquiring lock: {Name:mke935858c08b57824075e52fb45232e2555a3b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:21:33.388537  336025 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-9508/kubeconfig
	I0926 23:21:33.390306  336025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-9508/kubeconfig: {Name:mka72bf89c026ab3e09a0062db4219353845dcad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:21:33.390570  336025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0926 23:21:33.390584  336025 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0926 23:21:33.390660  336025 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 23:21:33.390785  336025 addons.go:69] Setting storage-provisioner=true in profile "flannel-708263"
	I0926 23:21:33.390814  336025 addons.go:69] Setting default-storageclass=true in profile "flannel-708263"
	I0926 23:21:33.390879  336025 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-708263"
	I0926 23:21:33.390791  336025 config.go:182] Loaded profile config "flannel-708263": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0926 23:21:33.390842  336025 addons.go:238] Setting addon storage-provisioner=true in "flannel-708263"
	I0926 23:21:33.391048  336025 host.go:66] Checking if "flannel-708263" exists ...
	I0926 23:21:33.391307  336025 cli_runner.go:164] Run: docker container inspect flannel-708263 --format={{.State.Status}}
	I0926 23:21:33.391431  336025 cli_runner.go:164] Run: docker container inspect flannel-708263 --format={{.State.Status}}
	I0926 23:21:33.392594  336025 out.go:179] * Verifying Kubernetes components...
	I0926 23:21:33.393698  336025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:21:33.415622  336025 addons.go:238] Setting addon default-storageclass=true in "flannel-708263"
	I0926 23:21:33.415670  336025 host.go:66] Checking if "flannel-708263" exists ...
	I0926 23:21:33.415731  336025 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 23:21:33.416175  336025 cli_runner.go:164] Run: docker container inspect flannel-708263 --format={{.State.Status}}
	I0926 23:21:33.417171  336025 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 23:21:33.417190  336025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 23:21:33.417255  336025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-708263
	I0926 23:21:33.448844  336025 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 23:21:33.448923  336025 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 23:21:33.449008  336025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-708263
	I0926 23:21:33.454162  336025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/flannel-708263/id_rsa Username:docker}
	I0926 23:21:33.475915  336025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/flannel-708263/id_rsa Username:docker}
	I0926 23:21:33.499790  336025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0926 23:21:33.528688  336025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:21:33.576723  336025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 23:21:33.588740  336025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 23:21:33.704292  336025 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0926 23:21:33.705945  336025 node_ready.go:35] waiting up to 15m0s for node "flannel-708263" to be "Ready" ...
	I0926 23:21:33.731072  336025 node_ready.go:49] node "flannel-708263" is "Ready"
	I0926 23:21:33.731105  336025 node_ready.go:38] duration metric: took 25.118596ms for node "flannel-708263" to be "Ready" ...
	I0926 23:21:33.731121  336025 api_server.go:52] waiting for apiserver process to appear ...
	I0926 23:21:33.731170  336025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:21:34.073853  336025 api_server.go:72] duration metric: took 683.22127ms to wait for apiserver process to appear ...
	I0926 23:21:34.073881  336025 api_server.go:88] waiting for apiserver healthz status ...
	I0926 23:21:34.073901  336025 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0926 23:21:34.081061  336025 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0926 23:21:34.082489  336025 api_server.go:141] control plane version: v1.34.0
	I0926 23:21:34.082515  336025 api_server.go:131] duration metric: took 8.62657ms to wait for apiserver health ...
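api_server.go above polls the apiserver's /healthz endpoint until it returns 200 with body "ok". A small Go sketch of one such probe against the address from this run; it trusts the cluster CA and relies on /healthz being readable without client credentials, which is an assumption about this cluster's default RBAC bindings:

	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func main() {
		// CA path and endpoint taken from the surrounding log; adjust for another profile.
		caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)

		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
		}
		resp, err := client.Get("https://192.168.103.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	}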
	I0926 23:21:34.082525  336025 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 23:21:34.084700  336025 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0926 23:21:34.086295  336025 addons.go:514] duration metric: took 695.61461ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0926 23:21:34.088151  336025 system_pods.go:59] 8 kube-system pods found
	I0926 23:21:34.088181  336025 system_pods.go:61] "coredns-66bc5c9577-gg6zc" [d4cc3a87-2d89-4253-8b1b-caec7977d663] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:34.088191  336025 system_pods.go:61] "coredns-66bc5c9577-hrthd" [26808758-f9f8-45c0-905e-372ef8bae62e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:34.088209  336025 system_pods.go:61] "etcd-flannel-708263" [b67c74e9-1be4-4b28-86fc-4df39757757e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:21:34.088219  336025 system_pods.go:61] "kube-apiserver-flannel-708263" [3fb4d0c7-587b-4787-a13b-5f3a5a81a926] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:21:34.088227  336025 system_pods.go:61] "kube-controller-manager-flannel-708263" [cd1938b5-25fc-4128-aee6-77b9289c1528] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:21:34.088235  336025 system_pods.go:61] "kube-proxy-p2nn2" [bc70955b-89dc-4203-8680-04d481236782] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 23:21:34.088241  336025 system_pods.go:61] "kube-scheduler-flannel-708263" [46a0bca9-a84a-4373-b63b-38344b174c1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:21:34.088248  336025 system_pods.go:61] "storage-provisioner" [bd777efb-bafd-4ecd-a0d2-e17159f39602] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:21:34.088253  336025 system_pods.go:74] duration metric: took 5.721996ms to wait for pod list to return data ...
	I0926 23:21:34.088261  336025 default_sa.go:34] waiting for default service account to be created ...
	I0926 23:21:34.090333  336025 default_sa.go:45] found service account: "default"
	I0926 23:21:34.090355  336025 default_sa.go:55] duration metric: took 2.08748ms for default service account to be created ...
	I0926 23:21:34.090365  336025 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 23:21:34.092843  336025 system_pods.go:86] 8 kube-system pods found
	I0926 23:21:34.092870  336025 system_pods.go:89] "coredns-66bc5c9577-gg6zc" [d4cc3a87-2d89-4253-8b1b-caec7977d663] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:34.092877  336025 system_pods.go:89] "coredns-66bc5c9577-hrthd" [26808758-f9f8-45c0-905e-372ef8bae62e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:34.092892  336025 system_pods.go:89] "etcd-flannel-708263" [b67c74e9-1be4-4b28-86fc-4df39757757e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:21:34.092898  336025 system_pods.go:89] "kube-apiserver-flannel-708263" [3fb4d0c7-587b-4787-a13b-5f3a5a81a926] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:21:34.092908  336025 system_pods.go:89] "kube-controller-manager-flannel-708263" [cd1938b5-25fc-4128-aee6-77b9289c1528] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:21:34.092914  336025 system_pods.go:89] "kube-proxy-p2nn2" [bc70955b-89dc-4203-8680-04d481236782] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 23:21:34.092922  336025 system_pods.go:89] "kube-scheduler-flannel-708263" [46a0bca9-a84a-4373-b63b-38344b174c1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:21:34.092926  336025 system_pods.go:89] "storage-provisioner" [bd777efb-bafd-4ecd-a0d2-e17159f39602] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:21:34.092944  336025 retry.go:31] will retry after 293.904512ms: missing components: kube-dns, kube-proxy
	I0926 23:21:34.210276  336025 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-708263" context rescaled to 1 replicas
	I0926 23:21:34.392308  336025 system_pods.go:86] 8 kube-system pods found
	I0926 23:21:34.392341  336025 system_pods.go:89] "coredns-66bc5c9577-gg6zc" [d4cc3a87-2d89-4253-8b1b-caec7977d663] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:34.392352  336025 system_pods.go:89] "coredns-66bc5c9577-hrthd" [26808758-f9f8-45c0-905e-372ef8bae62e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:34.392363  336025 system_pods.go:89] "etcd-flannel-708263" [b67c74e9-1be4-4b28-86fc-4df39757757e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:21:34.392373  336025 system_pods.go:89] "kube-apiserver-flannel-708263" [3fb4d0c7-587b-4787-a13b-5f3a5a81a926] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:21:34.392384  336025 system_pods.go:89] "kube-controller-manager-flannel-708263" [cd1938b5-25fc-4128-aee6-77b9289c1528] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:21:34.392397  336025 system_pods.go:89] "kube-proxy-p2nn2" [bc70955b-89dc-4203-8680-04d481236782] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 23:21:34.392408  336025 system_pods.go:89] "kube-scheduler-flannel-708263" [46a0bca9-a84a-4373-b63b-38344b174c1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:21:34.392413  336025 system_pods.go:89] "storage-provisioner" [bd777efb-bafd-4ecd-a0d2-e17159f39602] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:21:34.392429  336025 retry.go:31] will retry after 300.397495ms: missing components: kube-dns, kube-proxy
	I0926 23:21:34.696976  336025 system_pods.go:86] 8 kube-system pods found
	I0926 23:21:34.697013  336025 system_pods.go:89] "coredns-66bc5c9577-gg6zc" [d4cc3a87-2d89-4253-8b1b-caec7977d663] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:34.697024  336025 system_pods.go:89] "coredns-66bc5c9577-hrthd" [26808758-f9f8-45c0-905e-372ef8bae62e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:34.697035  336025 system_pods.go:89] "etcd-flannel-708263" [b67c74e9-1be4-4b28-86fc-4df39757757e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:21:34.697045  336025 system_pods.go:89] "kube-apiserver-flannel-708263" [3fb4d0c7-587b-4787-a13b-5f3a5a81a926] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:21:34.697056  336025 system_pods.go:89] "kube-controller-manager-flannel-708263" [cd1938b5-25fc-4128-aee6-77b9289c1528] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:21:34.697066  336025 system_pods.go:89] "kube-proxy-p2nn2" [bc70955b-89dc-4203-8680-04d481236782] Running
	I0926 23:21:34.697079  336025 system_pods.go:89] "kube-scheduler-flannel-708263" [46a0bca9-a84a-4373-b63b-38344b174c1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:21:34.697088  336025 system_pods.go:89] "storage-provisioner" [bd777efb-bafd-4ecd-a0d2-e17159f39602] Running
	I0926 23:21:34.697107  336025 retry.go:31] will retry after 470.966999ms: missing components: kube-dns
	I0926 23:21:35.172353  336025 system_pods.go:86] 8 kube-system pods found
	I0926 23:21:35.172393  336025 system_pods.go:89] "coredns-66bc5c9577-gg6zc" [d4cc3a87-2d89-4253-8b1b-caec7977d663] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:35.172404  336025 system_pods.go:89] "coredns-66bc5c9577-hrthd" [26808758-f9f8-45c0-905e-372ef8bae62e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:35.172413  336025 system_pods.go:89] "etcd-flannel-708263" [b67c74e9-1be4-4b28-86fc-4df39757757e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:21:35.172423  336025 system_pods.go:89] "kube-apiserver-flannel-708263" [3fb4d0c7-587b-4787-a13b-5f3a5a81a926] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:21:35.172433  336025 system_pods.go:89] "kube-controller-manager-flannel-708263" [cd1938b5-25fc-4128-aee6-77b9289c1528] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:21:35.172445  336025 system_pods.go:89] "kube-proxy-p2nn2" [bc70955b-89dc-4203-8680-04d481236782] Running
	I0926 23:21:35.172455  336025 system_pods.go:89] "kube-scheduler-flannel-708263" [46a0bca9-a84a-4373-b63b-38344b174c1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:21:35.172460  336025 system_pods.go:89] "storage-provisioner" [bd777efb-bafd-4ecd-a0d2-e17159f39602] Running
	I0926 23:21:35.172477  336025 retry.go:31] will retry after 596.791258ms: missing components: kube-dns
	I0926 23:21:35.773702  336025 system_pods.go:86] 7 kube-system pods found
	I0926 23:21:35.773786  336025 system_pods.go:89] "coredns-66bc5c9577-gg6zc" [d4cc3a87-2d89-4253-8b1b-caec7977d663] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:35.773799  336025 system_pods.go:89] "etcd-flannel-708263" [b67c74e9-1be4-4b28-86fc-4df39757757e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:21:35.773809  336025 system_pods.go:89] "kube-apiserver-flannel-708263" [3fb4d0c7-587b-4787-a13b-5f3a5a81a926] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:21:35.773817  336025 system_pods.go:89] "kube-controller-manager-flannel-708263" [cd1938b5-25fc-4128-aee6-77b9289c1528] Running
	I0926 23:21:35.773824  336025 system_pods.go:89] "kube-proxy-p2nn2" [bc70955b-89dc-4203-8680-04d481236782] Running
	I0926 23:21:35.773831  336025 system_pods.go:89] "kube-scheduler-flannel-708263" [46a0bca9-a84a-4373-b63b-38344b174c1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:21:35.773836  336025 system_pods.go:89] "storage-provisioner" [bd777efb-bafd-4ecd-a0d2-e17159f39602] Running
	I0926 23:21:35.773854  336025 retry.go:31] will retry after 559.874241ms: missing components: kube-dns
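The retry blocks above come from system_pods.go listing kube-system pods and backing off until the missing components (kube-dns here) report Running. A rough client-go sketch of the same wait loop, assuming the /var/lib/minikube/kubeconfig path written earlier in this log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from the log above; any admin kubeconfig works.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
			if err != nil {
				panic(err)
			}
			pending := 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					pending++
				}
			}
			if pending == 0 {
				fmt.Println("all kube-system pods are Running")
				return
			}
			fmt.Printf("%d pod(s) not Running yet, retrying...\n", pending)
			time.Sleep(500 * time.Millisecond) // the real code uses a growing backoff
		}
	}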
	I0926 23:21:33.446124  342224 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.502074591s
	I0926 23:21:33.450613  342224 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0926 23:21:33.450731  342224 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0926 23:21:33.450949  342224 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0926 23:21:33.451055  342224 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0926 23:21:35.513866  342224 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.063423987s
	I0926 23:21:35.911585  342224 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.461537784s
	I0926 23:21:37.452304  342224 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.002010659s
	I0926 23:21:37.464871  342224 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 23:21:37.475697  342224 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 23:21:37.485568  342224 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 23:21:37.485875  342224 kubeadm.go:318] [mark-control-plane] Marking the node bridge-708263 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 23:21:37.496887  342224 kubeadm.go:318] [bootstrap-token] Using token: yldfos.3192i55f90vwthpq
	I0926 23:21:37.498241  342224 out.go:252]   - Configuring RBAC rules ...
	I0926 23:21:37.498416  342224 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 23:21:37.503224  342224 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 23:21:37.509428  342224 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 23:21:37.519731  342224 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 23:21:37.522987  342224 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 23:21:37.528248  342224 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 23:21:37.860097  342224 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 23:21:38.280433  342224 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I0926 23:21:38.859105  342224 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I0926 23:21:38.860288  342224 kubeadm.go:318] 
	I0926 23:21:38.860388  342224 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I0926 23:21:38.860406  342224 kubeadm.go:318] 
	I0926 23:21:38.860545  342224 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I0926 23:21:38.860563  342224 kubeadm.go:318] 
	I0926 23:21:38.860614  342224 kubeadm.go:318]   mkdir -p $HOME/.kube
	I0926 23:21:38.860711  342224 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 23:21:38.860809  342224 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 23:21:38.860819  342224 kubeadm.go:318] 
	I0926 23:21:38.860909  342224 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I0926 23:21:38.860927  342224 kubeadm.go:318] 
	I0926 23:21:38.860992  342224 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 23:21:38.860997  342224 kubeadm.go:318] 
	I0926 23:21:38.861068  342224 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I0926 23:21:38.861269  342224 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 23:21:38.861374  342224 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 23:21:38.861383  342224 kubeadm.go:318] 
	I0926 23:21:38.861528  342224 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 23:21:38.861640  342224 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I0926 23:21:38.861650  342224 kubeadm.go:318] 
	I0926 23:21:38.861798  342224 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token yldfos.3192i55f90vwthpq \
	I0926 23:21:38.861953  342224 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1dbeb716d602e0941682b86f7d46c5a496a37728672c82fc41605cb6bf1292a7 \
	I0926 23:21:38.861984  342224 kubeadm.go:318] 	--control-plane 
	I0926 23:21:38.861991  342224 kubeadm.go:318] 
	I0926 23:21:38.862125  342224 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I0926 23:21:38.862135  342224 kubeadm.go:318] 
	I0926 23:21:38.862212  342224 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token yldfos.3192i55f90vwthpq \
	I0926 23:21:38.862304  342224 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1dbeb716d602e0941682b86f7d46c5a496a37728672c82fc41605cb6bf1292a7 
	I0926 23:21:38.865982  342224 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0926 23:21:38.866140  342224 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0926 23:21:38.866166  342224 cni.go:84] Creating CNI manager for "bridge"
	I0926 23:21:38.867643  342224 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
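For the bridge CNI, minikube configures networking through a CNI config file on the node rather than a manifest applied with kubectl. The sketch below is only illustrative: the filename, subnet, and plugin fields are assumptions about a typical bridge conflist, not values copied from this run.

	package main

	import "os"

	// A minimal bridge CNI conflist; values are illustrative and not taken from this run.
	const conflist = `{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		// Path is an assumption about where the kubelet and containerd look for CNI configs.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}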
	I0926 23:21:36.337975  336025 system_pods.go:86] 7 kube-system pods found
	I0926 23:21:36.338008  336025 system_pods.go:89] "coredns-66bc5c9577-gg6zc" [d4cc3a87-2d89-4253-8b1b-caec7977d663] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:36.338016  336025 system_pods.go:89] "etcd-flannel-708263" [b67c74e9-1be4-4b28-86fc-4df39757757e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:21:36.338028  336025 system_pods.go:89] "kube-apiserver-flannel-708263" [3fb4d0c7-587b-4787-a13b-5f3a5a81a926] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:21:36.338037  336025 system_pods.go:89] "kube-controller-manager-flannel-708263" [cd1938b5-25fc-4128-aee6-77b9289c1528] Running
	I0926 23:21:36.338042  336025 system_pods.go:89] "kube-proxy-p2nn2" [bc70955b-89dc-4203-8680-04d481236782] Running
	I0926 23:21:36.338046  336025 system_pods.go:89] "kube-scheduler-flannel-708263" [46a0bca9-a84a-4373-b63b-38344b174c1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:21:36.338049  336025 system_pods.go:89] "storage-provisioner" [bd777efb-bafd-4ecd-a0d2-e17159f39602] Running
	I0926 23:21:36.338063  336025 retry.go:31] will retry after 667.663397ms: missing components: kube-dns
	I0926 23:21:37.010434  336025 system_pods.go:86] 7 kube-system pods found
	I0926 23:21:37.010473  336025 system_pods.go:89] "coredns-66bc5c9577-gg6zc" [d4cc3a87-2d89-4253-8b1b-caec7977d663] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:37.010484  336025 system_pods.go:89] "etcd-flannel-708263" [b67c74e9-1be4-4b28-86fc-4df39757757e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:21:37.010495  336025 system_pods.go:89] "kube-apiserver-flannel-708263" [3fb4d0c7-587b-4787-a13b-5f3a5a81a926] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:21:37.010503  336025 system_pods.go:89] "kube-controller-manager-flannel-708263" [cd1938b5-25fc-4128-aee6-77b9289c1528] Running
	I0926 23:21:37.010509  336025 system_pods.go:89] "kube-proxy-p2nn2" [bc70955b-89dc-4203-8680-04d481236782] Running
	I0926 23:21:37.010517  336025 system_pods.go:89] "kube-scheduler-flannel-708263" [46a0bca9-a84a-4373-b63b-38344b174c1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:21:37.010523  336025 system_pods.go:89] "storage-provisioner" [bd777efb-bafd-4ecd-a0d2-e17159f39602] Running
	I0926 23:21:37.010541  336025 retry.go:31] will retry after 849.396238ms: missing components: kube-dns
	I0926 23:21:37.865481  336025 system_pods.go:86] 7 kube-system pods found
	I0926 23:21:37.865519  336025 system_pods.go:89] "coredns-66bc5c9577-gg6zc" [d4cc3a87-2d89-4253-8b1b-caec7977d663] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:37.865538  336025 system_pods.go:89] "etcd-flannel-708263" [b67c74e9-1be4-4b28-86fc-4df39757757e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:21:37.866058  336025 system_pods.go:89] "kube-apiserver-flannel-708263" [3fb4d0c7-587b-4787-a13b-5f3a5a81a926] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:21:37.866083  336025 system_pods.go:89] "kube-controller-manager-flannel-708263" [cd1938b5-25fc-4128-aee6-77b9289c1528] Running
	I0926 23:21:37.866091  336025 system_pods.go:89] "kube-proxy-p2nn2" [bc70955b-89dc-4203-8680-04d481236782] Running
	I0926 23:21:37.866104  336025 system_pods.go:89] "kube-scheduler-flannel-708263" [46a0bca9-a84a-4373-b63b-38344b174c1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:21:37.866119  336025 system_pods.go:89] "storage-provisioner" [bd777efb-bafd-4ecd-a0d2-e17159f39602] Running
	I0926 23:21:37.866147  336025 retry.go:31] will retry after 1.294778011s: missing components: kube-dns
	I0926 23:21:39.165475  336025 system_pods.go:86] 7 kube-system pods found
	I0926 23:21:39.165509  336025 system_pods.go:89] "coredns-66bc5c9577-gg6zc" [d4cc3a87-2d89-4253-8b1b-caec7977d663] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:39.165517  336025 system_pods.go:89] "etcd-flannel-708263" [b67c74e9-1be4-4b28-86fc-4df39757757e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:21:39.165524  336025 system_pods.go:89] "kube-apiserver-flannel-708263" [3fb4d0c7-587b-4787-a13b-5f3a5a81a926] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:21:39.165528  336025 system_pods.go:89] "kube-controller-manager-flannel-708263" [cd1938b5-25fc-4128-aee6-77b9289c1528] Running
	I0926 23:21:39.165534  336025 system_pods.go:89] "kube-proxy-p2nn2" [bc70955b-89dc-4203-8680-04d481236782] Running
	I0926 23:21:39.165538  336025 system_pods.go:89] "kube-scheduler-flannel-708263" [46a0bca9-a84a-4373-b63b-38344b174c1c] Running
	I0926 23:21:39.165545  336025 system_pods.go:89] "storage-provisioner" [bd777efb-bafd-4ecd-a0d2-e17159f39602] Running
	I0926 23:21:39.165561  336025 retry.go:31] will retry after 1.835922784s: missing components: kube-dns
	I0926 23:21:41.005582  336025 system_pods.go:86] 7 kube-system pods found
	I0926 23:21:41.005619  336025 system_pods.go:89] "coredns-66bc5c9577-gg6zc" [d4cc3a87-2d89-4253-8b1b-caec7977d663] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:41.005628  336025 system_pods.go:89] "etcd-flannel-708263" [b67c74e9-1be4-4b28-86fc-4df39757757e] Running
	I0926 23:21:41.005639  336025 system_pods.go:89] "kube-apiserver-flannel-708263" [3fb4d0c7-587b-4787-a13b-5f3a5a81a926] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:21:41.005645  336025 system_pods.go:89] "kube-controller-manager-flannel-708263" [cd1938b5-25fc-4128-aee6-77b9289c1528] Running
	I0926 23:21:41.005653  336025 system_pods.go:89] "kube-proxy-p2nn2" [bc70955b-89dc-4203-8680-04d481236782] Running
	I0926 23:21:41.005658  336025 system_pods.go:89] "kube-scheduler-flannel-708263" [46a0bca9-a84a-4373-b63b-38344b174c1c] Running
	I0926 23:21:41.005662  336025 system_pods.go:89] "storage-provisioner" [bd777efb-bafd-4ecd-a0d2-e17159f39602] Running
	I0926 23:21:41.005679  336025 retry.go:31] will retry after 2.095194722s: missing components: kube-dns
	I0926 23:21:38.868703  342224 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0926 23:21:38.879975  342224 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0926 23:21:38.901034  342224 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 23:21:38.901164  342224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:38.901263  342224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-708263 minikube.k8s.io/updated_at=2025_09_26T23_21_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47 minikube.k8s.io/name=bridge-708263 minikube.k8s.io/primary=true
	I0926 23:21:38.911444  342224 ops.go:34] apiserver oom_adj: -16
	I0926 23:21:38.985119  342224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:39.485976  342224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:39.985798  342224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:40.485969  342224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:40.986086  342224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:41.485345  342224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:41.985882  342224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:42.486138  342224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:42.985907  342224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:43.485273  342224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:43.985667  342224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:21:44.058862  342224 kubeadm.go:1113] duration metric: took 5.157737454s to wait for elevateKubeSystemPrivileges
	I0926 23:21:44.058894  342224 kubeadm.go:402] duration metric: took 15.729944161s to StartCluster
	I0926 23:21:44.058912  342224 settings.go:142] acquiring lock: {Name:mke935858c08b57824075e52fb45232e2555a3b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:21:44.058973  342224 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-9508/kubeconfig
	I0926 23:21:44.060812  342224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-9508/kubeconfig: {Name:mka72bf89c026ab3e09a0062db4219353845dcad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:21:44.061108  342224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0926 23:21:44.061126  342224 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0926 23:21:44.061324  342224 config.go:182] Loaded profile config "bridge-708263": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0926 23:21:44.061248  342224 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 23:21:44.061398  342224 addons.go:69] Setting storage-provisioner=true in profile "bridge-708263"
	I0926 23:21:44.061412  342224 addons.go:69] Setting default-storageclass=true in profile "bridge-708263"
	I0926 23:21:44.061431  342224 addons.go:238] Setting addon storage-provisioner=true in "bridge-708263"
	I0926 23:21:44.061442  342224 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-708263"
	I0926 23:21:44.061463  342224 host.go:66] Checking if "bridge-708263" exists ...
	I0926 23:21:44.061897  342224 cli_runner.go:164] Run: docker container inspect bridge-708263 --format={{.State.Status}}
	I0926 23:21:44.062276  342224 cli_runner.go:164] Run: docker container inspect bridge-708263 --format={{.State.Status}}
	I0926 23:21:44.068889  342224 out.go:179] * Verifying Kubernetes components...
	I0926 23:21:44.070160  342224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:21:44.101778  342224 addons.go:238] Setting addon default-storageclass=true in "bridge-708263"
	I0926 23:21:44.101840  342224 host.go:66] Checking if "bridge-708263" exists ...
	I0926 23:21:44.102358  342224 cli_runner.go:164] Run: docker container inspect bridge-708263 --format={{.State.Status}}
	I0926 23:21:44.104326  342224 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 23:21:44.108775  342224 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 23:21:44.108931  342224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 23:21:44.109196  342224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-708263
	I0926 23:21:44.140886  342224 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 23:21:44.141123  342224 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 23:21:44.141296  342224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-708263
	I0926 23:21:44.154335  342224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/bridge-708263/id_rsa Username:docker}
	I0926 23:21:44.183390  342224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/bridge-708263/id_rsa Username:docker}
	I0926 23:21:44.204063  342224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0926 23:21:44.252513  342224 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:21:44.300963  342224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 23:21:44.319001  342224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 23:21:44.493465  342224 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0926 23:21:44.497112  342224 node_ready.go:35] waiting up to 15m0s for node "bridge-708263" to be "Ready" ...
	I0926 23:21:44.510039  342224 node_ready.go:49] node "bridge-708263" is "Ready"
	I0926 23:21:44.510069  342224 node_ready.go:38] duration metric: took 12.914423ms for node "bridge-708263" to be "Ready" ...
	I0926 23:21:44.510086  342224 api_server.go:52] waiting for apiserver process to appear ...
	I0926 23:21:44.510148  342224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:21:44.764251  342224 api_server.go:72] duration metric: took 703.092431ms to wait for apiserver process to appear ...
	I0926 23:21:44.764274  342224 api_server.go:88] waiting for apiserver healthz status ...
	I0926 23:21:44.764289  342224 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0926 23:21:44.771409  342224 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0926 23:21:44.772207  342224 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0926 23:21:43.105630  336025 system_pods.go:86] 7 kube-system pods found
	I0926 23:21:43.105666  336025 system_pods.go:89] "coredns-66bc5c9577-gg6zc" [d4cc3a87-2d89-4253-8b1b-caec7977d663] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:43.105674  336025 system_pods.go:89] "etcd-flannel-708263" [b67c74e9-1be4-4b28-86fc-4df39757757e] Running
	I0926 23:21:43.105682  336025 system_pods.go:89] "kube-apiserver-flannel-708263" [3fb4d0c7-587b-4787-a13b-5f3a5a81a926] Running
	I0926 23:21:43.105689  336025 system_pods.go:89] "kube-controller-manager-flannel-708263" [cd1938b5-25fc-4128-aee6-77b9289c1528] Running
	I0926 23:21:43.105694  336025 system_pods.go:89] "kube-proxy-p2nn2" [bc70955b-89dc-4203-8680-04d481236782] Running
	I0926 23:21:43.105699  336025 system_pods.go:89] "kube-scheduler-flannel-708263" [46a0bca9-a84a-4373-b63b-38344b174c1c] Running
	I0926 23:21:43.105705  336025 system_pods.go:89] "storage-provisioner" [bd777efb-bafd-4ecd-a0d2-e17159f39602] Running
	I0926 23:21:43.105721  336025 retry.go:31] will retry after 1.814441727s: missing components: kube-dns
	I0926 23:21:44.924720  336025 system_pods.go:86] 7 kube-system pods found
	I0926 23:21:44.924792  336025 system_pods.go:89] "coredns-66bc5c9577-gg6zc" [d4cc3a87-2d89-4253-8b1b-caec7977d663] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:44.924803  336025 system_pods.go:89] "etcd-flannel-708263" [b67c74e9-1be4-4b28-86fc-4df39757757e] Running
	I0926 23:21:44.924812  336025 system_pods.go:89] "kube-apiserver-flannel-708263" [3fb4d0c7-587b-4787-a13b-5f3a5a81a926] Running
	I0926 23:21:44.924819  336025 system_pods.go:89] "kube-controller-manager-flannel-708263" [cd1938b5-25fc-4128-aee6-77b9289c1528] Running
	I0926 23:21:44.924826  336025 system_pods.go:89] "kube-proxy-p2nn2" [bc70955b-89dc-4203-8680-04d481236782] Running
	I0926 23:21:44.924868  336025 system_pods.go:89] "kube-scheduler-flannel-708263" [46a0bca9-a84a-4373-b63b-38344b174c1c] Running
	I0926 23:21:44.924874  336025 system_pods.go:89] "storage-provisioner" [bd777efb-bafd-4ecd-a0d2-e17159f39602] Running
	I0926 23:21:44.924895  336025 retry.go:31] will retry after 2.638518994s: missing components: kube-dns
	I0926 23:21:44.772444  342224 api_server.go:141] control plane version: v1.34.0
	I0926 23:21:44.772462  342224 api_server.go:131] duration metric: took 8.182397ms to wait for apiserver health ...
	I0926 23:21:44.772471  342224 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 23:21:44.773094  342224 addons.go:514] duration metric: took 711.893405ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0926 23:21:44.775410  342224 system_pods.go:59] 8 kube-system pods found
	I0926 23:21:44.775450  342224 system_pods.go:61] "coredns-66bc5c9577-ctldj" [05f1db7f-ff20-431c-aa47-6a2fcbf31959] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:44.775461  342224 system_pods.go:61] "coredns-66bc5c9577-lk8t4" [bd81c29a-363d-4750-a04f-0f1c612642a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:44.775470  342224 system_pods.go:61] "etcd-bridge-708263" [99c7894b-6f08-426c-95ae-6c66ab3f0b52] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:21:44.775483  342224 system_pods.go:61] "kube-apiserver-bridge-708263" [bf282970-d44c-4acd-80fd-aad4e8c6ef02] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:21:44.775492  342224 system_pods.go:61] "kube-controller-manager-bridge-708263" [6118faf5-ebc4-4548-91ae-4bd4ffce9c32] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:21:44.775500  342224 system_pods.go:61] "kube-proxy-9gwxm" [269ce1c5-79f4-4df9-869d-a216d8e70e00] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 23:21:44.775507  342224 system_pods.go:61] "kube-scheduler-bridge-708263" [67965fde-80c3-4886-a851-ce2f89c7d60c] Running
	I0926 23:21:44.775515  342224 system_pods.go:61] "storage-provisioner" [d1dec7e1-0608-439c-a398-43099f54c46a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:21:44.775522  342224 system_pods.go:74] duration metric: took 3.042073ms to wait for pod list to return data ...
	I0926 23:21:44.775531  342224 default_sa.go:34] waiting for default service account to be created ...
	I0926 23:21:44.777564  342224 default_sa.go:45] found service account: "default"
	I0926 23:21:44.777579  342224 default_sa.go:55] duration metric: took 2.042686ms for default service account to be created ...
	I0926 23:21:44.777587  342224 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 23:21:44.781929  342224 system_pods.go:86] 8 kube-system pods found
	I0926 23:21:44.781960  342224 system_pods.go:89] "coredns-66bc5c9577-ctldj" [05f1db7f-ff20-431c-aa47-6a2fcbf31959] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:44.781978  342224 system_pods.go:89] "coredns-66bc5c9577-lk8t4" [bd81c29a-363d-4750-a04f-0f1c612642a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:44.781988  342224 system_pods.go:89] "etcd-bridge-708263" [99c7894b-6f08-426c-95ae-6c66ab3f0b52] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:21:44.781997  342224 system_pods.go:89] "kube-apiserver-bridge-708263" [bf282970-d44c-4acd-80fd-aad4e8c6ef02] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:21:44.782006  342224 system_pods.go:89] "kube-controller-manager-bridge-708263" [6118faf5-ebc4-4548-91ae-4bd4ffce9c32] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:21:44.782014  342224 system_pods.go:89] "kube-proxy-9gwxm" [269ce1c5-79f4-4df9-869d-a216d8e70e00] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 23:21:44.782020  342224 system_pods.go:89] "kube-scheduler-bridge-708263" [67965fde-80c3-4886-a851-ce2f89c7d60c] Running
	I0926 23:21:44.782027  342224 system_pods.go:89] "storage-provisioner" [d1dec7e1-0608-439c-a398-43099f54c46a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:21:44.782060  342224 retry.go:31] will retry after 289.530407ms: missing components: kube-dns, kube-proxy
	I0926 23:21:45.000926  342224 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-708263" context rescaled to 1 replicas
	I0926 23:21:45.076145  342224 system_pods.go:86] 8 kube-system pods found
	I0926 23:21:45.076187  342224 system_pods.go:89] "coredns-66bc5c9577-ctldj" [05f1db7f-ff20-431c-aa47-6a2fcbf31959] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:45.076207  342224 system_pods.go:89] "coredns-66bc5c9577-lk8t4" [bd81c29a-363d-4750-a04f-0f1c612642a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:45.076220  342224 system_pods.go:89] "etcd-bridge-708263" [99c7894b-6f08-426c-95ae-6c66ab3f0b52] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:21:45.076234  342224 system_pods.go:89] "kube-apiserver-bridge-708263" [bf282970-d44c-4acd-80fd-aad4e8c6ef02] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:21:45.076248  342224 system_pods.go:89] "kube-controller-manager-bridge-708263" [6118faf5-ebc4-4548-91ae-4bd4ffce9c32] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:21:45.076262  342224 system_pods.go:89] "kube-proxy-9gwxm" [269ce1c5-79f4-4df9-869d-a216d8e70e00] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 23:21:45.076273  342224 system_pods.go:89] "kube-scheduler-bridge-708263" [67965fde-80c3-4886-a851-ce2f89c7d60c] Running
	I0926 23:21:45.076282  342224 system_pods.go:89] "storage-provisioner" [d1dec7e1-0608-439c-a398-43099f54c46a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:21:45.076304  342224 retry.go:31] will retry after 285.127946ms: missing components: kube-dns, kube-proxy
	I0926 23:21:45.365621  342224 system_pods.go:86] 8 kube-system pods found
	I0926 23:21:45.365652  342224 system_pods.go:89] "coredns-66bc5c9577-ctldj" [05f1db7f-ff20-431c-aa47-6a2fcbf31959] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:45.365659  342224 system_pods.go:89] "coredns-66bc5c9577-lk8t4" [bd81c29a-363d-4750-a04f-0f1c612642a6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:45.365665  342224 system_pods.go:89] "etcd-bridge-708263" [99c7894b-6f08-426c-95ae-6c66ab3f0b52] Running
	I0926 23:21:45.365672  342224 system_pods.go:89] "kube-apiserver-bridge-708263" [bf282970-d44c-4acd-80fd-aad4e8c6ef02] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:21:45.365680  342224 system_pods.go:89] "kube-controller-manager-bridge-708263" [6118faf5-ebc4-4548-91ae-4bd4ffce9c32] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:21:45.365685  342224 system_pods.go:89] "kube-proxy-9gwxm" [269ce1c5-79f4-4df9-869d-a216d8e70e00] Running
	I0926 23:21:45.365691  342224 system_pods.go:89] "kube-scheduler-bridge-708263" [67965fde-80c3-4886-a851-ce2f89c7d60c] Running
	I0926 23:21:45.365701  342224 system_pods.go:89] "storage-provisioner" [d1dec7e1-0608-439c-a398-43099f54c46a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:21:45.365713  342224 system_pods.go:126] duration metric: took 588.121039ms to wait for k8s-apps to be running ...
	I0926 23:21:45.365726  342224 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 23:21:45.365783  342224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:21:45.378646  342224 system_svc.go:56] duration metric: took 12.908716ms WaitForService to wait for kubelet
	I0926 23:21:45.378679  342224 kubeadm.go:586] duration metric: took 1.317524609s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 23:21:45.378702  342224 node_conditions.go:102] verifying NodePressure condition ...
	I0926 23:21:45.381525  342224 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0926 23:21:45.381555  342224 node_conditions.go:123] node cpu capacity is 8
	I0926 23:21:45.381569  342224 node_conditions.go:105] duration metric: took 2.861761ms to run NodePressure ...
	I0926 23:21:45.381594  342224 start.go:241] waiting for startup goroutines ...
	I0926 23:21:45.381607  342224 start.go:246] waiting for cluster config update ...
	I0926 23:21:45.381620  342224 start.go:255] writing updated cluster config ...
	I0926 23:21:45.381922  342224 ssh_runner.go:195] Run: rm -f paused
	I0926 23:21:45.386496  342224 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:21:45.390343  342224 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ctldj" in "kube-system" namespace to be "Ready" or be gone ...
	W0926 23:21:47.396154  342224 pod_ready.go:104] pod "coredns-66bc5c9577-ctldj" is not "Ready", error: <nil>
	I0926 23:21:47.567507  336025 system_pods.go:86] 7 kube-system pods found
	I0926 23:21:47.567535  336025 system_pods.go:89] "coredns-66bc5c9577-gg6zc" [d4cc3a87-2d89-4253-8b1b-caec7977d663] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:21:47.567540  336025 system_pods.go:89] "etcd-flannel-708263" [b67c74e9-1be4-4b28-86fc-4df39757757e] Running
	I0926 23:21:47.567546  336025 system_pods.go:89] "kube-apiserver-flannel-708263" [3fb4d0c7-587b-4787-a13b-5f3a5a81a926] Running
	I0926 23:21:47.567550  336025 system_pods.go:89] "kube-controller-manager-flannel-708263" [cd1938b5-25fc-4128-aee6-77b9289c1528] Running
	I0926 23:21:47.567554  336025 system_pods.go:89] "kube-proxy-p2nn2" [bc70955b-89dc-4203-8680-04d481236782] Running
	I0926 23:21:47.567558  336025 system_pods.go:89] "kube-scheduler-flannel-708263" [46a0bca9-a84a-4373-b63b-38344b174c1c] Running
	I0926 23:21:47.567568  336025 system_pods.go:89] "storage-provisioner" [bd777efb-bafd-4ecd-a0d2-e17159f39602] Running
	I0926 23:21:47.567582  336025 retry.go:31] will retry after 3.673087072s: missing components: kube-dns
	I0926 23:21:51.244138  336025 system_pods.go:86] 7 kube-system pods found
	I0926 23:21:51.244164  336025 system_pods.go:89] "coredns-66bc5c9577-gg6zc" [d4cc3a87-2d89-4253-8b1b-caec7977d663] Running
	I0926 23:21:51.244169  336025 system_pods.go:89] "etcd-flannel-708263" [b67c74e9-1be4-4b28-86fc-4df39757757e] Running
	I0926 23:21:51.244173  336025 system_pods.go:89] "kube-apiserver-flannel-708263" [3fb4d0c7-587b-4787-a13b-5f3a5a81a926] Running
	I0926 23:21:51.244177  336025 system_pods.go:89] "kube-controller-manager-flannel-708263" [cd1938b5-25fc-4128-aee6-77b9289c1528] Running
	I0926 23:21:51.244180  336025 system_pods.go:89] "kube-proxy-p2nn2" [bc70955b-89dc-4203-8680-04d481236782] Running
	I0926 23:21:51.244183  336025 system_pods.go:89] "kube-scheduler-flannel-708263" [46a0bca9-a84a-4373-b63b-38344b174c1c] Running
	I0926 23:21:51.244186  336025 system_pods.go:89] "storage-provisioner" [bd777efb-bafd-4ecd-a0d2-e17159f39602] Running
	I0926 23:21:51.244195  336025 system_pods.go:126] duration metric: took 17.153823779s to wait for k8s-apps to be running ...
	I0926 23:21:51.244204  336025 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 23:21:51.244255  336025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:21:51.256328  336025 system_svc.go:56] duration metric: took 12.114496ms WaitForService to wait for kubelet
	I0926 23:21:51.256358  336025 kubeadm.go:586] duration metric: took 17.865741664s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 23:21:51.256373  336025 node_conditions.go:102] verifying NodePressure condition ...
	I0926 23:21:51.258903  336025 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0926 23:21:51.258922  336025 node_conditions.go:123] node cpu capacity is 8
	I0926 23:21:51.258935  336025 node_conditions.go:105] duration metric: took 2.558197ms to run NodePressure ...
	I0926 23:21:51.258949  336025 start.go:241] waiting for startup goroutines ...
	I0926 23:21:51.258958  336025 start.go:246] waiting for cluster config update ...
	I0926 23:21:51.258970  336025 start.go:255] writing updated cluster config ...
	I0926 23:21:51.259242  336025 ssh_runner.go:195] Run: rm -f paused
	I0926 23:21:51.262584  336025 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:21:51.265568  336025 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gg6zc" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:21:51.269396  336025 pod_ready.go:94] pod "coredns-66bc5c9577-gg6zc" is "Ready"
	I0926 23:21:51.269415  336025 pod_ready.go:86] duration metric: took 3.82974ms for pod "coredns-66bc5c9577-gg6zc" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:21:51.271170  336025 pod_ready.go:83] waiting for pod "etcd-flannel-708263" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:21:51.274422  336025 pod_ready.go:94] pod "etcd-flannel-708263" is "Ready"
	I0926 23:21:51.274441  336025 pod_ready.go:86] duration metric: took 3.253878ms for pod "etcd-flannel-708263" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:21:51.276051  336025 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-708263" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:21:51.279294  336025 pod_ready.go:94] pod "kube-apiserver-flannel-708263" is "Ready"
	I0926 23:21:51.279314  336025 pod_ready.go:86] duration metric: took 3.243488ms for pod "kube-apiserver-flannel-708263" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:21:51.280992  336025 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-708263" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:21:51.666471  336025 pod_ready.go:94] pod "kube-controller-manager-flannel-708263" is "Ready"
	I0926 23:21:51.666500  336025 pod_ready.go:86] duration metric: took 385.484416ms for pod "kube-controller-manager-flannel-708263" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:21:51.866464  336025 pod_ready.go:83] waiting for pod "kube-proxy-p2nn2" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:21:52.266173  336025 pod_ready.go:94] pod "kube-proxy-p2nn2" is "Ready"
	I0926 23:21:52.266199  336025 pod_ready.go:86] duration metric: took 399.711032ms for pod "kube-proxy-p2nn2" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:21:52.467007  336025 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-708263" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:21:52.865798  336025 pod_ready.go:94] pod "kube-scheduler-flannel-708263" is "Ready"
	I0926 23:21:52.865824  336025 pod_ready.go:86] duration metric: took 398.792364ms for pod "kube-scheduler-flannel-708263" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:21:52.865836  336025 pod_ready.go:40] duration metric: took 1.603223946s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:21:52.908056  336025 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 23:21:52.909682  336025 out.go:179] * Done! kubectl is now configured to use "flannel-708263" cluster and "default" namespace by default
	W0926 23:21:49.895570  342224 pod_ready.go:104] pod "coredns-66bc5c9577-ctldj" is not "Ready", error: <nil>
	W0926 23:21:52.395662  342224 pod_ready.go:104] pod "coredns-66bc5c9577-ctldj" is not "Ready", error: <nil>
	W0926 23:21:54.396169  342224 pod_ready.go:104] pod "coredns-66bc5c9577-ctldj" is not "Ready", error: <nil>
	W0926 23:21:56.895386  342224 pod_ready.go:104] pod "coredns-66bc5c9577-ctldj" is not "Ready", error: <nil>
	W0926 23:21:58.895562  342224 pod_ready.go:104] pod "coredns-66bc5c9577-ctldj" is not "Ready", error: <nil>
	W0926 23:22:00.895729  342224 pod_ready.go:104] pod "coredns-66bc5c9577-ctldj" is not "Ready", error: <nil>
	W0926 23:22:05.619179  240327 system_pods.go:55] pod list returned error: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
	I0926 23:22:05.620639  240327 out.go:203] 
	W0926 23:22:05.621646  240327 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for system pods: apiserver never returned a pod list
	W0926 23:22:05.621658  240327 out.go:285] * 
	W0926 23:22:05.623471  240327 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 23:22:05.624545  240327 out.go:203] 
	W0926 23:22:03.395477  342224 pod_ready.go:104] pod "coredns-66bc5c9577-ctldj" is not "Ready", error: <nil>
	W0926 23:22:05.896664  342224 pod_ready.go:104] pod "coredns-66bc5c9577-ctldj" is not "Ready", error: <nil>
	W0926 23:22:08.396016  342224 pod_ready.go:104] pod "coredns-66bc5c9577-ctldj" is not "Ready", error: <nil>
	W0926 23:22:10.894981  342224 pod_ready.go:104] pod "coredns-66bc5c9577-ctldj" is not "Ready", error: <nil>
	W0926 23:22:12.895138  342224 pod_ready.go:104] pod "coredns-66bc5c9577-ctldj" is not "Ready", error: <nil>
	W0926 23:22:14.896080  342224 pod_ready.go:104] pod "coredns-66bc5c9577-ctldj" is not "Ready", error: <nil>
	W0926 23:22:17.395158  342224 pod_ready.go:104] pod "coredns-66bc5c9577-ctldj" is not "Ready", error: <nil>
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c92f8c47cec5d       a0af72f2ec6d6       2 minutes ago       Running             kube-controller-manager   1                   6824cd584a7fb       kube-controller-manager-kubernetes-upgrade-655811
	9d31663d053ea       90550c43ad2bc       2 minutes ago       Exited              kube-apiserver            6                   9ab343d838253       kube-apiserver-kubernetes-upgrade-655811
	9c87256f37aac       a0af72f2ec6d6       5 minutes ago       Exited              kube-controller-manager   0                   6824cd584a7fb       kube-controller-manager-kubernetes-upgrade-655811
	ca190779349cb       46169d968e920       7 minutes ago       Running             kube-scheduler            0                   e1cba3559a1c4       kube-scheduler-kubernetes-upgrade-655811
	690b7cda90238       5f1f5298c888d       7 minutes ago       Running             etcd                      0                   d706344529121       etcd-kubernetes-upgrade-655811
	
	
	==> containerd <==
	Sep 26 23:18:01 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:18:01.503562911Z" level=info msg="StartContainer for \"ee45db76fb7149efe0a130aba848b5ff81d0c44869b079eb2d98aaf0f79adc04\" returns successfully"
	Sep 26 23:18:01 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:18:01.552969355Z" level=info msg="received exit event container_id:\"ee45db76fb7149efe0a130aba848b5ff81d0c44869b079eb2d98aaf0f79adc04\"  id:\"ee45db76fb7149efe0a130aba848b5ff81d0c44869b079eb2d98aaf0f79adc04\"  pid:3219  exit_status:1  exited_at:{seconds:1758928681  nanos:552681364}"
	Sep 26 23:18:01 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:18:01.576251939Z" level=info msg="shim disconnected" id=ee45db76fb7149efe0a130aba848b5ff81d0c44869b079eb2d98aaf0f79adc04 namespace=k8s.io
	Sep 26 23:18:01 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:18:01.576298884Z" level=warning msg="cleaning up after shim disconnected" id=ee45db76fb7149efe0a130aba848b5ff81d0c44869b079eb2d98aaf0f79adc04 namespace=k8s.io
	Sep 26 23:18:01 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:18:01.576310164Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 26 23:18:02 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:18:02.475816355Z" level=info msg="RemoveContainer for \"cf1a5e0f723c51aef2ac47e47bf12415d1f8b96530f35017890b741ac531ad84\""
	Sep 26 23:18:02 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:18:02.478823126Z" level=info msg="RemoveContainer for \"cf1a5e0f723c51aef2ac47e47bf12415d1f8b96530f35017890b741ac531ad84\" returns successfully"
	Sep 26 23:19:36 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:19:36.715218724Z" level=info msg="received exit event container_id:\"9c87256f37aac81fbc782779d0910f5bb98345e4f6937b526cfbe9588224d4e4\"  id:\"9c87256f37aac81fbc782779d0910f5bb98345e4f6937b526cfbe9588224d4e4\"  pid:3101  exit_status:1  exited_at:{seconds:1758928776  nanos:714832253}"
	Sep 26 23:19:36 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:19:36.738614904Z" level=info msg="shim disconnected" id=9c87256f37aac81fbc782779d0910f5bb98345e4f6937b526cfbe9588224d4e4 namespace=k8s.io
	Sep 26 23:19:36 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:19:36.738648520Z" level=warning msg="cleaning up after shim disconnected" id=9c87256f37aac81fbc782779d0910f5bb98345e4f6937b526cfbe9588224d4e4 namespace=k8s.io
	Sep 26 23:19:36 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:19:36.738659488Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 26 23:19:44 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:19:44.613035470Z" level=info msg="CreateContainer within sandbox \"9ab343d838253f9b963212ece5f7f85d07299564a9293238b8022cbe84a934d6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:6,}"
	Sep 26 23:19:44 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:19:44.622109857Z" level=info msg="CreateContainer within sandbox \"9ab343d838253f9b963212ece5f7f85d07299564a9293238b8022cbe84a934d6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:6,} returns container id \"9d31663d053ea85bf2d188b55cc161a6c950582b9921a8203dda87f969909e9b\""
	Sep 26 23:19:44 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:19:44.622590067Z" level=info msg="StartContainer for \"9d31663d053ea85bf2d188b55cc161a6c950582b9921a8203dda87f969909e9b\""
	Sep 26 23:19:44 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:19:44.704308853Z" level=info msg="StartContainer for \"9d31663d053ea85bf2d188b55cc161a6c950582b9921a8203dda87f969909e9b\" returns successfully"
	Sep 26 23:19:44 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:19:44.759288798Z" level=info msg="received exit event container_id:\"9d31663d053ea85bf2d188b55cc161a6c950582b9921a8203dda87f969909e9b\"  id:\"9d31663d053ea85bf2d188b55cc161a6c950582b9921a8203dda87f969909e9b\"  pid:3312  exit_status:1  exited_at:{seconds:1758928784  nanos:758924815}"
	Sep 26 23:19:44 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:19:44.784934821Z" level=info msg="shim disconnected" id=9d31663d053ea85bf2d188b55cc161a6c950582b9921a8203dda87f969909e9b namespace=k8s.io
	Sep 26 23:19:44 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:19:44.784983867Z" level=warning msg="cleaning up after shim disconnected" id=9d31663d053ea85bf2d188b55cc161a6c950582b9921a8203dda87f969909e9b namespace=k8s.io
	Sep 26 23:19:44 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:19:44.784995929Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 26 23:19:45 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:19:45.681468311Z" level=info msg="RemoveContainer for \"ee45db76fb7149efe0a130aba848b5ff81d0c44869b079eb2d98aaf0f79adc04\""
	Sep 26 23:19:45 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:19:45.685243074Z" level=info msg="RemoveContainer for \"ee45db76fb7149efe0a130aba848b5ff81d0c44869b079eb2d98aaf0f79adc04\" returns successfully"
	Sep 26 23:20:11 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:20:11.664782413Z" level=info msg="CreateContainer within sandbox \"6824cd584a7fb3948fc569f7b4cd3408b2cbbf5346207284547ae60f9a6d1566\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
	Sep 26 23:20:11 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:20:11.675242828Z" level=info msg="CreateContainer within sandbox \"6824cd584a7fb3948fc569f7b4cd3408b2cbbf5346207284547ae60f9a6d1566\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c92f8c47cec5dabdf3d77de74338f400807f1b29152d632c104f508111044638\""
	Sep 26 23:20:11 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:20:11.675818668Z" level=info msg="StartContainer for \"c92f8c47cec5dabdf3d77de74338f400807f1b29152d632c104f508111044638\""
	Sep 26 23:20:11 kubernetes-upgrade-655811 containerd[1965]: time="2025-09-26T23:20:11.753119852Z" level=info msg="StartContainer for \"c92f8c47cec5dabdf3d77de74338f400807f1b29152d632c104f508111044638\" returns successfully"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a f0 80 e2 b6 c4 08 06
	[ +15.545396] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a cf fe 98 28 55 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a f0 80 e2 b6 c4 08 06
	[Sep26 23:21] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5e 07 fe b1 b9 2b 08 06
	[ +19.252894] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 52 8f 59 94 11 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5e 07 fe b1 b9 2b 08 06
	[ +12.784684] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 0f 7d af f9 f2 08 06
	[  +0.000298] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da e5 92 2b 1b 76 08 06
	[  +3.106691] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 7b 6b 94 f3 f6 08 06
	[Sep26 23:22] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 36 c4 bd 64 8f 08 06
	[  +0.000336] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 7b 6b 94 f3 f6 08 06
	[ +23.662179] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a 40 17 20 93 f3 08 06
	[  +0.000336] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da e5 92 2b 1b 76 08 06
	
	
	==> etcd [690b7cda90238c771c650d91f7b7447529e7d8e5f2caa11cc75b84d404a35f73] <==
	{"level":"info","ts":"2025-09-26T23:14:45.986132Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 3"}
	{"level":"info","ts":"2025-09-26T23:14:45.986178Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 3"}
	{"level":"info","ts":"2025-09-26T23:14:45.986243Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-09-26T23:14:45.986316Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-09-26T23:14:45.986348Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 4"}
	{"level":"info","ts":"2025-09-26T23:14:45.986949Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2025-09-26T23:14:45.986991Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-09-26T23:14:45.987010Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 4"}
	{"level":"info","ts":"2025-09-26T23:14:45.987028Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2025-09-26T23:14:45.987558Z","caller":"etcdserver/server.go:2409","msg":"updating cluster version using v3 API","from":"3.5","to":"3.6"}
	{"level":"info","ts":"2025-09-26T23:14:45.987905Z","caller":"etcdserver/server.go:1804","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:kubernetes-upgrade-655811 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-26T23:14:45.987942Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-26T23:14:45.988074Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-26T23:14:45.988160Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.5","to":"3.6"}
	{"level":"info","ts":"2025-09-26T23:14:45.988255Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-09-26T23:14:45.988294Z","caller":"etcdserver/server.go:2424","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-09-26T23:14:45.988349Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-09-26T23:14:45.988429Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-09-26T23:14:45.989207Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-26T23:14:45.989453Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"warn","ts":"2025-09-26T23:14:45.989584Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-09-26T23:14:45.989595Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-26T23:14:45.990055Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-26T23:14:45.993537Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-09-26T23:14:45.994127Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 23:23:22 up  1:05,  0 users,  load average: 1.52, 2.74, 2.25
	Linux kubernetes-upgrade-655811 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [9d31663d053ea85bf2d188b55cc161a6c950582b9921a8203dda87f969909e9b] <==
	command /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 9d31663d053ea85bf2d188b55cc161a6c950582b9921a8203dda87f969909e9b" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 9d31663d053ea85bf2d188b55cc161a6c950582b9921a8203dda87f969909e9b": Process exited with status 1
	stdout:
	
	stderr:
	E0926 23:23:22.269639    3823 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d31663d053ea85bf2d188b55cc161a6c950582b9921a8203dda87f969909e9b\": not found" containerID="9d31663d053ea85bf2d188b55cc161a6c950582b9921a8203dda87f969909e9b"
	time="2025-09-26T23:23:22Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"9d31663d053ea85bf2d188b55cc161a6c950582b9921a8203dda87f969909e9b\": not found"
	
	
	==> kube-controller-manager [9c87256f37aac81fbc782779d0910f5bb98345e4f6937b526cfbe9588224d4e4] <==
	command /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 9c87256f37aac81fbc782779d0910f5bb98345e4f6937b526cfbe9588224d4e4" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 9c87256f37aac81fbc782779d0910f5bb98345e4f6937b526cfbe9588224d4e4": Process exited with status 1
	stdout:
	
	stderr:
	E0926 23:23:22.305415    3834 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c87256f37aac81fbc782779d0910f5bb98345e4f6937b526cfbe9588224d4e4\": not found" containerID="9c87256f37aac81fbc782779d0910f5bb98345e4f6937b526cfbe9588224d4e4"
	time="2025-09-26T23:23:22Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"9c87256f37aac81fbc782779d0910f5bb98345e4f6937b526cfbe9588224d4e4\": not found"
	
	
	==> kube-controller-manager [c92f8c47cec5dabdf3d77de74338f400807f1b29152d632c104f508111044638] <==
	I0926 23:20:12.714908       1 serving.go:386] Generated self-signed cert in-memory
	I0926 23:20:12.997545       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0926 23:20:12.997578       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 23:20:12.998900       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0926 23:20:12.998899       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0926 23:20:12.999164       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0926 23:20:12.999195       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0926 23:23:14.008652       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: the server was unable to return a response in the time allotted, but may still be processing the request"
	
	
	==> kube-scheduler [ca190779349cb50151bab6187679c5d33d29a3fa71f0da322bcdb0409666f2c7] <==
	I0926 23:14:46.359506       1 serving.go:386] Generated self-signed cert in-memory
	W0926 23:15:47.017946       1 authentication.go:397] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
	W0926 23:15:47.017995       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0926 23:15:47.018008       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0926 23:15:47.032065       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0926 23:15:47.032089       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 23:15:47.035463       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 23:15:47.035505       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 23:15:47.035822       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0926 23:15:47.036086       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0926 23:15:47.136060       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0926 23:16:21.139240       1 event_broadcaster.go:270] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{storage-provisioner.1868f831c55cf6a1  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},EventTime:2025-09-26 23:15:47.136456867 +0000 UTC m=+61.256730101,Series:nil,ReportingController:default-scheduler,ReportingInstance:default-scheduler-kubernetes-upgrade-655811,Action:Scheduling,Reason:FailedScheduling,Regarding:{Pod kube-system storage-provisioner 8cc5063b-d5c7-4011-b388-cabd997ec5d9 v1 318 },Related:nil,Note:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.,Type:Warning,DeprecatedSource:{ },DeprecatedFirstTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedLastTimestamp:0001-01-01 00:00:00 +0000 UTC,Deprec
atedCount:0,}"
	E0926 23:16:21.141398       1 pod_status_patch.go:111] "Failed to patch pod status" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/storage-provisioner"
	E0926 23:21:21.145558       1 pod_status_patch.go:111] "Failed to patch pod status" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/storage-provisioner"
	E0926 23:21:21.145615       1 event_broadcaster.go:270] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{storage-provisioner.1868f831c55cf6a1  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},EventTime:2025-09-26 23:15:47.136456867 +0000 UTC m=+61.256730101,Series:&EventSeries{Count:2,LastObservedTime:2025-09-26 23:20:47.143320399 +0000 UTC m=+361.263593874,},ReportingController:default-scheduler,ReportingInstance:default-scheduler-kubernetes-upgrade-655811,Action:Scheduling,Reason:FailedScheduling,Regarding:{Pod kube-system storage-provisioner 8cc5063b-d5c7-4011-b388-cabd997ec5d9 v1 318 },Related:nil,Note:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.,Type:Warning,DeprecatedSource:{ },DeprecatedFirstTimesta
mp:0001-01-01 00:00:00 +0000 UTC,DeprecatedLastTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedCount:0,}"
	
	
	==> kubelet <==
	Sep 26 23:22:22 kubernetes-upgrade-655811 kubelet[1112]: I0926 23:22:22.963006    1112 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-controller-manager-kubernetes-upgrade-655811" podUID="61bcad73-85c9-4a0e-b660-45db8fcee29a"
	Sep 26 23:22:31 kubernetes-upgrade-655811 kubelet[1112]: E0926 23:22:31.983844    1112 status_manager.go:1018] "Failed to get status for pod" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-kubernetes-upgrade-655811)" podUID="9639febd397ed8d6e55e35ee752883c7" pod="kube-system/kube-controller-manager-kubernetes-upgrade-655811"
	Sep 26 23:22:33 kubernetes-upgrade-655811 kubelet[1112]: E0926 23:22:33.030405    1112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-655811?timeout=10s\": context deadline exceeded" interval="7s"
	Sep 26 23:22:35 kubernetes-upgrade-655811 kubelet[1112]: E0926 23:22:35.956840    1112 mirror_client.go:139] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/kube-apiserver-kubernetes-upgrade-655811"
	Sep 26 23:22:35 kubernetes-upgrade-655811 kubelet[1112]: I0926 23:22:35.956959    1112 scope.go:117] "RemoveContainer" containerID="9d31663d053ea85bf2d188b55cc161a6c950582b9921a8203dda87f969909e9b"
	Sep 26 23:22:37 kubernetes-upgrade-655811 kubelet[1112]: I0926 23:22:37.020084    1112 scope.go:117] "RemoveContainer" containerID="9d31663d053ea85bf2d188b55cc161a6c950582b9921a8203dda87f969909e9b"
	Sep 26 23:22:37 kubernetes-upgrade-655811 kubelet[1112]: I0926 23:22:37.020248    1112 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-apiserver-kubernetes-upgrade-655811" podUID="90b0f03a-4536-4826-bb15-3c3d9c1d80af"
	Sep 26 23:22:38 kubernetes-upgrade-655811 kubelet[1112]: E0926 23:22:38.417841    1112 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-26T23:22:28Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-26T23:22:28Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-26T23:22:28Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-26T23:22:28Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\\\",\\\"registry.k8s.io/etcd:3.6.4-0\\\"],\\\"sizeBytes\\\":74311308},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:
fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812\\\",\\\"registry.k8s.io/kube-apiserver:v1.34.0\\\"],\\\"sizeBytes\\\":27066504},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81\\\",\\\"registry.k8s.io/kube-controller-manager:v1.34.0\\\"],\\\"sizeBytes\\\":22819719},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff\\\",\\\"registry.k8s.io/kube-scheduler:v1.34.0\\\"],\\\"sizeBytes\\\":17385558},{\\\"names\\\":[\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\"],\\\"sizeBytes\\\":9057171},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\\\",\\\"registry.k8s.io/pause:3.10.1\\\"],\\\"sizeBytes\\\":320448}]}}\" for node \"kubernetes-upgrade-655811\": Patch \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-655811/status?timeout=10s\": net/http: request c
anceled (Client.Timeout exceeded while awaiting headers)"
	Sep 26 23:22:46 kubernetes-upgrade-655811 kubelet[1112]: I0926 23:22:46.963021    1112 kubelet.go:3202] "Trying to delete pod" pod="kube-system/etcd-kubernetes-upgrade-655811" podUID="a86cad92-6a5c-4943-a47c-945edbc629b1"
	Sep 26 23:22:48 kubernetes-upgrade-655811 kubelet[1112]: E0926 23:22:48.418800    1112 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-655811\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-655811?timeout=10s\": context deadline exceeded"
	Sep 26 23:22:50 kubernetes-upgrade-655811 kubelet[1112]: E0926 23:22:50.031354    1112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-655811?timeout=10s\": context deadline exceeded" interval="7s"
	Sep 26 23:22:56 kubernetes-upgrade-655811 kubelet[1112]: E0926 23:22:56.965294    1112 mirror_client.go:139] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/kube-controller-manager-kubernetes-upgrade-655811"
	Sep 26 23:22:58 kubernetes-upgrade-655811 kubelet[1112]: E0926 23:22:58.419830    1112 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-655811\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes kubernetes-upgrade-655811)"
	Sep 26 23:23:07 kubernetes-upgrade-655811 kubelet[1112]: E0926 23:23:07.033144    1112 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io kubernetes-upgrade-655811)" interval="7s"
	Sep 26 23:23:08 kubernetes-upgrade-655811 kubelet[1112]: E0926 23:23:08.420529    1112 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-655811\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-655811?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Sep 26 23:23:11 kubernetes-upgrade-655811 kubelet[1112]: E0926 23:23:11.022084    1112 mirror_client.go:139] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/kube-apiserver-kubernetes-upgrade-655811"
	Sep 26 23:23:11 kubernetes-upgrade-655811 kubelet[1112]: I0926 23:23:11.022187    1112 scope.go:117] "RemoveContainer" containerID="a3ad4bda49660c70ca2d86f58893b990033fa00177c088fbe1938c88f7b66998"
	Sep 26 23:23:11 kubernetes-upgrade-655811 kubelet[1112]: E0926 23:23:11.022408    1112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-655811_kube-system(5ccbefa6a8b7cc07add939a7735719ec)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-655811" podUID="5ccbefa6a8b7cc07add939a7735719ec"
	Sep 26 23:23:11 kubernetes-upgrade-655811 kubelet[1112]: I0926 23:23:11.083888    1112 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-apiserver-kubernetes-upgrade-655811" podUID="90b0f03a-4536-4826-bb15-3c3d9c1d80af"
	Sep 26 23:23:11 kubernetes-upgrade-655811 kubelet[1112]: I0926 23:23:11.963299    1112 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-scheduler-kubernetes-upgrade-655811" podUID="8b3fd534-ff0c-4dd6-9b14-ce205afe6892"
	Sep 26 23:23:14 kubernetes-upgrade-655811 kubelet[1112]: I0926 23:23:14.090170    1112 scope.go:117] "RemoveContainer" containerID="9c87256f37aac81fbc782779d0910f5bb98345e4f6937b526cfbe9588224d4e4"
	Sep 26 23:23:14 kubernetes-upgrade-655811 kubelet[1112]: I0926 23:23:14.090446    1112 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-controller-manager-kubernetes-upgrade-655811" podUID="61bcad73-85c9-4a0e-b660-45db8fcee29a"
	Sep 26 23:23:18 kubernetes-upgrade-655811 kubelet[1112]: E0926 23:23:18.421193    1112 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-655811\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-655811?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Sep 26 23:23:18 kubernetes-upgrade-655811 kubelet[1112]: E0926 23:23:18.421245    1112 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count"
	Sep 26 23:23:20 kubernetes-upgrade-655811 kubelet[1112]: E0926 23:23:20.965320    1112 mirror_client.go:139] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/etcd-kubernetes-upgrade-655811"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-655811 -n kubernetes-upgrade-655811
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-655811 -n kubernetes-upgrade-655811: exit status 2 (15.871195559s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-655811" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-655811" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-655811
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-655811: (2.515936994s)
--- FAIL: TestKubernetesUpgrade (631.04s)
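
Note on the failure mode above: the apiserver is crash-looping ("back-off 5m0s restarting failed container=kube-apiserver"), so the kubelet, scheduler and controller-manager all time out against https://control-plane.minikube.internal:8443, and the post-mortem log collector then hits "NotFound" when it runs `crictl logs` against container IDs that containerd has already removed. A minimal guard for that last step, as a sketch only (the container ID and tail length are copied from the failing commands quoted above; crictl inside the node is assumed):

    #!/bin/bash
    # Sketch: only ask crictl for logs if the container still exists,
    # instead of letting the log fetch fail with "NotFound".
    CID="9c87256f37aac81fbc782779d0910f5bb98345e4f6937b526cfbe9588224d4e4"
    if sudo /usr/bin/crictl inspect "$CID" >/dev/null 2>&1; then
      sudo /usr/bin/crictl logs --tail 25 "$CID"
    else
      echo "container $CID no longer exists; skipping log collection"
    fi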

                                                
                                    

Test pass (294/331)

Order    Passed test    Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 13.17
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.19
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.34.0/json-events 11.98
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.05
18 TestDownloadOnly/v1.34.0/DeleteAll 0.19
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 1.11
21 TestBinaryMirror 0.76
22 TestOffline 59.01
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 153.97
29 TestAddons/serial/Volcano 39.55
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 9.43
35 TestAddons/parallel/Registry 16.81
36 TestAddons/parallel/RegistryCreds 0.6
37 TestAddons/parallel/Ingress 22.39
38 TestAddons/parallel/InspektorGadget 5.29
39 TestAddons/parallel/MetricsServer 5.82
41 TestAddons/parallel/CSI 38.47
42 TestAddons/parallel/Headlamp 17.4
43 TestAddons/parallel/CloudSpanner 5.49
44 TestAddons/parallel/LocalPath 16.13
45 TestAddons/parallel/NvidiaDevicePlugin 5.49
46 TestAddons/parallel/Yakd 10.64
47 TestAddons/parallel/AmdGpuDevicePlugin 5.46
48 TestAddons/StoppedEnableDisable 12.14
49 TestCertOptions 25.06
50 TestCertExpiration 211.49
52 TestForceSystemdFlag 23.77
53 TestForceSystemdEnv 36.02
55 TestKVMDriverInstallOrUpdate 0.63
59 TestErrorSpam/setup 19.97
60 TestErrorSpam/start 0.56
61 TestErrorSpam/status 0.85
62 TestErrorSpam/pause 1.34
63 TestErrorSpam/unpause 1.41
64 TestErrorSpam/stop 1.37
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 43.33
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 5.78
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.07
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.72
76 TestFunctional/serial/CacheCmd/cache/add_local 1.84
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.04
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
81 TestFunctional/serial/CacheCmd/cache/delete 0.09
82 TestFunctional/serial/MinikubeKubectlCmd 0.1
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 40.01
85 TestFunctional/serial/ComponentHealth 0.06
86 TestFunctional/serial/LogsCmd 1.28
87 TestFunctional/serial/LogsFileCmd 1.3
88 TestFunctional/serial/InvalidService 4.24
90 TestFunctional/parallel/ConfigCmd 0.31
92 TestFunctional/parallel/DryRun 0.33
93 TestFunctional/parallel/InternationalLanguage 0.14
94 TestFunctional/parallel/StatusCmd 0.86
99 TestFunctional/parallel/AddonsCmd 0.11
102 TestFunctional/parallel/SSHCmd 0.61
103 TestFunctional/parallel/CpCmd 1.69
105 TestFunctional/parallel/FileSync 0.24
106 TestFunctional/parallel/CertSync 1.48
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
114 TestFunctional/parallel/License 0.34
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/Version/components 0.48
119 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.42
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
132 TestFunctional/parallel/ProfileCmd/profile_list 0.35
133 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
134 TestFunctional/parallel/MountCmd/any-port 6.54
135 TestFunctional/parallel/MountCmd/specific-port 1.79
136 TestFunctional/parallel/MountCmd/VerifyCleanup 1.64
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.37
142 TestFunctional/parallel/ImageCommands/Setup 1.74
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.04
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.98
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.79
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.42
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.58
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
150 TestFunctional/parallel/ServiceCmd/List 1.67
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.68
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.01
162 TestMultiControlPlane/serial/StartCluster 91.61
163 TestMultiControlPlane/serial/DeployApp 18.37
164 TestMultiControlPlane/serial/PingHostFromPods 0.99
165 TestMultiControlPlane/serial/AddWorkerNode 12.64
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
168 TestMultiControlPlane/serial/CopyFile 15.67
169 TestMultiControlPlane/serial/StopSecondaryNode 12.52
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.65
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.4
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.83
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 86.69
174 TestMultiControlPlane/serial/DeleteSecondaryNode 8.89
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
176 TestMultiControlPlane/serial/StopCluster 35.69
177 TestMultiControlPlane/serial/RestartCluster 51.33
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
179 TestMultiControlPlane/serial/AddSecondaryNode 31.38
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
184 TestJSONOutput/start/Command 44.12
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.6
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.57
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.65
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.19
209 TestKicCustomNetwork/create_custom_network 33.97
210 TestKicCustomNetwork/use_default_bridge_network 22.92
211 TestKicExistingNetwork 21.98
212 TestKicCustomSubnet 22.52
213 TestKicStaticIP 22.4
214 TestMainNoArgs 0.04
215 TestMinikubeProfile 44.36
218 TestMountStart/serial/StartWithMountFirst 5.16
219 TestMountStart/serial/VerifyMountFirst 0.24
220 TestMountStart/serial/StartWithMountSecond 5.46
221 TestMountStart/serial/VerifyMountSecond 0.24
222 TestMountStart/serial/DeleteFirst 1.6
223 TestMountStart/serial/VerifyMountPostDelete 0.24
224 TestMountStart/serial/Stop 1.16
225 TestMountStart/serial/RestartStopped 7.67
226 TestMountStart/serial/VerifyMountPostStop 0.24
229 TestMultiNode/serial/FreshStart2Nodes 54.59
230 TestMultiNode/serial/DeployApp2Nodes 18.15
231 TestMultiNode/serial/PingHostFrom2Pods 0.69
232 TestMultiNode/serial/AddNode 12.25
233 TestMultiNode/serial/MultiNodeLabels 0.06
234 TestMultiNode/serial/ProfileList 0.62
235 TestMultiNode/serial/CopyFile 8.9
236 TestMultiNode/serial/StopNode 2.12
237 TestMultiNode/serial/StartAfterStop 6.79
238 TestMultiNode/serial/RestartKeepsNodes 72.14
239 TestMultiNode/serial/DeleteNode 5.04
240 TestMultiNode/serial/StopMultiNode 23.8
241 TestMultiNode/serial/RestartMultiNode 49.13
242 TestMultiNode/serial/ValidateNameConflict 22.17
247 TestPreload 116.59
249 TestScheduledStopUnix 95.55
252 TestInsufficientStorage 8.94
253 TestRunningBinaryUpgrade 53.53
256 TestMissingContainerUpgrade 80.15
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
259 TestStoppedBinaryUpgrade/Setup 2.67
260 TestNoKubernetes/serial/StartWithK8s 37.29
261 TestStoppedBinaryUpgrade/Upgrade 64.05
262 TestNoKubernetes/serial/StartWithStopK8s 23.84
263 TestNoKubernetes/serial/Start 8.1
264 TestStoppedBinaryUpgrade/MinikubeLogs 1.12
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
266 TestNoKubernetes/serial/ProfileList 1.03
274 TestNoKubernetes/serial/Stop 1.18
275 TestNoKubernetes/serial/StartNoArgs 7.23
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
284 TestNetworkPlugins/group/false 4.93
289 TestPause/serial/Start 46.29
291 TestStartStop/group/old-k8s-version/serial/FirstStart 52.02
292 TestPause/serial/SecondStartNoReconfiguration 5.29
293 TestPause/serial/Pause 0.66
294 TestPause/serial/VerifyStatus 0.28
295 TestPause/serial/Unpause 0.57
296 TestPause/serial/PauseAgain 0.66
297 TestPause/serial/DeletePaused 2.58
298 TestPause/serial/VerifyDeletedResources 18.01
299 TestStartStop/group/old-k8s-version/serial/DeployApp 9.25
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.87
301 TestStartStop/group/old-k8s-version/serial/Stop 13.21
303 TestStartStop/group/no-preload/serial/FirstStart 48.99
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
305 TestStartStop/group/old-k8s-version/serial/SecondStart 49.99
306 TestStartStop/group/no-preload/serial/DeployApp 10.26
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.79
308 TestStartStop/group/no-preload/serial/Stop 11.94
309 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
310 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
311 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
312 TestStartStop/group/old-k8s-version/serial/Pause 2.56
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
314 TestStartStop/group/no-preload/serial/SecondStart 51.15
316 TestStartStop/group/embed-certs/serial/FirstStart 46.45
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.84
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
320 TestStartStop/group/embed-certs/serial/DeployApp 9.22
321 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.76
323 TestStartStop/group/embed-certs/serial/Stop 12.05
324 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
325 TestStartStop/group/no-preload/serial/Pause 2.66
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.31
328 TestStartStop/group/newest-cni/serial/FirstStart 27.73
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
330 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.94
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
332 TestStartStop/group/embed-certs/serial/SecondStart 50.3
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
334 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.7
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.91
337 TestStartStop/group/newest-cni/serial/Stop 1.22
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
339 TestStartStop/group/newest-cni/serial/SecondStart 11.46
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
343 TestStartStop/group/newest-cni/serial/Pause 2.42
344 TestNetworkPlugins/group/auto/Start 44.46
345 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
346 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
347 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
348 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
349 TestStartStop/group/embed-certs/serial/Pause 2.52
350 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
351 TestNetworkPlugins/group/kindnet/Start 73.43
352 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
353 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.01
354 TestNetworkPlugins/group/calico/Start 45.65
355 TestNetworkPlugins/group/auto/KubeletFlags 0.3
356 TestNetworkPlugins/group/auto/NetCatPod 9.2
357 TestNetworkPlugins/group/auto/DNS 0.13
358 TestNetworkPlugins/group/auto/Localhost 0.12
359 TestNetworkPlugins/group/auto/HairPin 0.1
360 TestNetworkPlugins/group/custom-flannel/Start 41.93
361 TestNetworkPlugins/group/calico/ControllerPod 6.01
362 TestNetworkPlugins/group/calico/KubeletFlags 0.27
363 TestNetworkPlugins/group/calico/NetCatPod 9.19
364 TestNetworkPlugins/group/calico/DNS 0.13
365 TestNetworkPlugins/group/calico/Localhost 0.1
366 TestNetworkPlugins/group/calico/HairPin 0.1
367 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
368 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
369 TestNetworkPlugins/group/kindnet/NetCatPod 9.18
370 TestNetworkPlugins/group/enable-default-cni/Start 35.31
371 TestNetworkPlugins/group/kindnet/DNS 0.15
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
373 TestNetworkPlugins/group/kindnet/Localhost 0.13
374 TestNetworkPlugins/group/kindnet/HairPin 0.11
375 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.21
376 TestNetworkPlugins/group/custom-flannel/DNS 0.13
377 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
378 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
379 TestNetworkPlugins/group/flannel/Start 46.85
380 TestNetworkPlugins/group/bridge/Start 64.49
381 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
382 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.26
383 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
384 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
385 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
386 TestNetworkPlugins/group/flannel/ControllerPod 6.01
387 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
388 TestNetworkPlugins/group/flannel/NetCatPod 9.16
389 TestNetworkPlugins/group/flannel/DNS 0.13
390 TestNetworkPlugins/group/flannel/Localhost 0.1
391 TestNetworkPlugins/group/flannel/HairPin 0.1
392 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
393 TestNetworkPlugins/group/bridge/NetCatPod 9.17
394 TestNetworkPlugins/group/bridge/DNS 0.12
395 TestNetworkPlugins/group/bridge/Localhost 0.1
396 TestNetworkPlugins/group/bridge/HairPin 0.1
TestDownloadOnly/v1.28.0/json-events (13.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-421026 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-421026 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.165871006s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (13.17s)
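
The json-events subtest starts minikube with -o=json --download-only, so each progress event is printed as one JSON object per line on stdout. A sketch of replaying the same invocation and compact-printing the events, assuming jq is installed (jq is not used by the test itself):

    # Sketch: re-run the download-only start and pretty-print each JSON event line;
    # klog output from --alsologtostderr goes to stderr and is discarded here.
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-421026 \
      --force --alsologtostderr --kubernetes-version=v1.28.0 \
      --container-runtime=containerd --driver=docker 2>/dev/null \
      | while IFS= read -r line; do
          echo "$line" | jq -c . 2>/dev/null || printf '%s\n' "$line"
        done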

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0926 22:29:08.945746   13040 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I0926 22:29:08.945864   13040 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-9508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
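
preload-exists passes because preload.go:146 finds the tarball that the previous json-events run cached. A manual equivalent of that check, assuming the cache lives under MINIKUBE_HOME (this run sets MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube):

    # Sketch of the same existence check the test logs at preload.go:146.
    PRELOAD="${MINIKUBE_HOME:-$HOME/.minikube}/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4"
    if [ -f "$PRELOAD" ]; then
      echo "preload present: $PRELOAD"
    else
      echo "preload missing: $PRELOAD"
    fi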

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-421026
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-421026: exit status 85 (54.757367ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-421026 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-421026 │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:28:55
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:28:55.817131   13054 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:28:55.817377   13054 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:28:55.817387   13054 out.go:374] Setting ErrFile to fd 2...
	I0926 22:28:55.817391   13054 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:28:55.817584   13054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
	W0926 22:28:55.817739   13054 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21642-9508/.minikube/config/config.json: open /home/jenkins/minikube-integration/21642-9508/.minikube/config/config.json: no such file or directory
	I0926 22:28:55.818248   13054 out.go:368] Setting JSON to true
	I0926 22:28:55.819129   13054 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":671,"bootTime":1758925065,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:28:55.819210   13054 start.go:140] virtualization: kvm guest
	I0926 22:28:55.821199   13054 out.go:99] [download-only-421026] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0926 22:28:55.821318   13054 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21642-9508/.minikube/cache/preloaded-tarball: no such file or directory
	I0926 22:28:55.821354   13054 notify.go:220] Checking for updates...
	I0926 22:28:55.822482   13054 out.go:171] MINIKUBE_LOCATION=21642
	I0926 22:28:55.823598   13054 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:28:55.824633   13054 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21642-9508/kubeconfig
	I0926 22:28:55.825578   13054 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube
	I0926 22:28:55.826584   13054 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0926 22:28:55.828400   13054 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0926 22:28:55.828613   13054 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:28:55.850710   13054 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:28:55.850796   13054 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:28:56.179219   13054 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-26 22:28:56.169214154 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:28:56.179320   13054 docker.go:318] overlay module found
	I0926 22:28:56.180543   13054 out.go:99] Using the docker driver based on user configuration
	I0926 22:28:56.180570   13054 start.go:304] selected driver: docker
	I0926 22:28:56.180583   13054 start.go:924] validating driver "docker" against <nil>
	I0926 22:28:56.180656   13054 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:28:56.238511   13054 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-26 22:28:56.228519956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:28:56.238698   13054 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 22:28:56.239245   13054 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0926 22:28:56.239389   13054 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0926 22:28:56.240691   13054 out.go:171] Using Docker driver with root privileges
	I0926 22:28:56.241612   13054 cni.go:84] Creating CNI manager for ""
	I0926 22:28:56.241666   13054 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0926 22:28:56.241677   13054 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0926 22:28:56.241734   13054 start.go:348] cluster config:
	{Name:download-only-421026 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-421026 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:container
d CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:28:56.242826   13054 out.go:99] Starting "download-only-421026" primary control-plane node in "download-only-421026" cluster
	I0926 22:28:56.242848   13054 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0926 22:28:56.243775   13054 out.go:99] Pulling base image v0.0.48 ...
	I0926 22:28:56.243820   13054 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0926 22:28:56.243867   13054 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0926 22:28:56.259015   13054 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0926 22:28:56.259173   13054 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0926 22:28:56.259256   13054 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0926 22:28:56.342506   13054 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I0926 22:28:56.342531   13054 cache.go:58] Caching tarball of preloaded images
	I0926 22:28:56.342682   13054 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0926 22:28:56.344205   13054 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0926 22:28:56.344219   13054 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 ...
	I0926 22:28:56.438917   13054 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/21642-9508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I0926 22:29:04.676902   13054 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	
	
	* The control-plane node download-only-421026 host does not exist
	  To start a cluster, run: "minikube start -p download-only-421026"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
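
The "Last Start" log above downloads the preload from storage.googleapis.com and passes an md5 checksum as a query parameter (2746dfda401436a5341e0500068bf339). A manual fetch-and-verify of the same artifact, as a sketch (curl and md5sum are assumptions, not part of the test):

    # Sketch: download the same preload tarball and check it against the md5
    # carried by the checksum parameter in the URL logged above.
    URL="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4"
    curl -fL -o preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 "$URL"
    echo "2746dfda401436a5341e0500068bf339  preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4" | md5sum -c -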

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.19s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-421026
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (11.98s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-661982 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-661982 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.974733493s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (11.98s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0926 22:29:21.284766   13040 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
I0926 22:29:21.284810   13040 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-9508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-661982
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-661982: exit status 85 (54.022752ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-421026 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-421026 │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ delete  │ -p download-only-421026                                                                                                                                                               │ download-only-421026 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ start   │ -o=json --download-only -p download-only-661982 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-661982 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:29:09
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:29:09.347133   13421 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:29:09.347221   13421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:09.347233   13421 out.go:374] Setting ErrFile to fd 2...
	I0926 22:29:09.347239   13421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:09.347425   13421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
	I0926 22:29:09.347860   13421 out.go:368] Setting JSON to true
	I0926 22:29:09.348705   13421 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":684,"bootTime":1758925065,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:29:09.348793   13421 start.go:140] virtualization: kvm guest
	I0926 22:29:09.350340   13421 out.go:99] [download-only-661982] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:29:09.350504   13421 notify.go:220] Checking for updates...
	I0926 22:29:09.351510   13421 out.go:171] MINIKUBE_LOCATION=21642
	I0926 22:29:09.352665   13421 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:29:09.353939   13421 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21642-9508/kubeconfig
	I0926 22:29:09.355016   13421 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube
	I0926 22:29:09.356005   13421 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0926 22:29:09.357742   13421 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0926 22:29:09.357947   13421 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:29:09.379487   13421 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:29:09.379535   13421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:29:09.429796   13421 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-26 22:29:09.420900534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:29:09.429925   13421 docker.go:318] overlay module found
	I0926 22:29:09.431385   13421 out.go:99] Using the docker driver based on user configuration
	I0926 22:29:09.431416   13421 start.go:304] selected driver: docker
	I0926 22:29:09.431422   13421 start.go:924] validating driver "docker" against <nil>
	I0926 22:29:09.431520   13421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:29:09.483395   13421 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-26 22:29:09.474313066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:29:09.483576   13421 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 22:29:09.484100   13421 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0926 22:29:09.484230   13421 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0926 22:29:09.485783   13421 out.go:171] Using Docker driver with root privileges
	I0926 22:29:09.486804   13421 cni.go:84] Creating CNI manager for ""
	I0926 22:29:09.486854   13421 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0926 22:29:09.486864   13421 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0926 22:29:09.486914   13421 start.go:348] cluster config:
	{Name:download-only-661982 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-661982 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:container
d CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:29:09.488045   13421 out.go:99] Starting "download-only-661982" primary control-plane node in "download-only-661982" cluster
	I0926 22:29:09.488068   13421 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0926 22:29:09.489028   13421 out.go:99] Pulling base image v0.0.48 ...
	I0926 22:29:09.489050   13421 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0926 22:29:09.489157   13421 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0926 22:29:09.504137   13421 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0926 22:29:09.504241   13421 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0926 22:29:09.504256   13421 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0926 22:29:09.504260   13421 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0926 22:29:09.504275   13421 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0926 22:29:09.852339   13421 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0926 22:29:09.852367   13421 cache.go:58] Caching tarball of preloaded images
	I0926 22:29:09.852520   13421 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0926 22:29:09.854103   13421 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0926 22:29:09.854118   13421 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 ...
	I0926 22:29:09.954129   13421 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2b7b36e7513c2e517ecf49b6f3ce02cf -> /home/jenkins/minikube-integration/21642-9508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-661982 host does not exist
	  To start a cluster, run: "minikube start -p download-only-661982"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.05s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.19s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-661982
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnlyKic (1.11s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-296284 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-296284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-296284
--- PASS: TestDownloadOnlyKic (1.11s)

                                                
                                    
TestBinaryMirror (0.76s)

                                                
                                                
=== RUN   TestBinaryMirror
I0926 22:29:22.987246   13040 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-700148 --alsologtostderr --binary-mirror http://127.0.0.1:33947 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-700148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-700148
--- PASS: TestBinaryMirror (0.76s)

                                                
                                    
TestOffline (59.01s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-013846 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-013846 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (56.568297993s)
helpers_test.go:175: Cleaning up "offline-containerd-013846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-013846
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-013846: (2.445053962s)
--- PASS: TestOffline (59.01s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-048605
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-048605: exit status 85 (47.201926ms)

                                                
                                                
-- stdout --
	* Profile "addons-048605" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-048605"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-048605
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-048605: exit status 85 (46.30556ms)

                                                
                                                
-- stdout --
	* Profile "addons-048605" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-048605"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (153.97s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-048605 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-048605 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m33.968375882s)
--- PASS: TestAddons/Setup (153.97s)

                                                
                                    
TestAddons/serial/Volcano (39.55s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:876: volcano-admission stabilized in 12.931135ms
addons_test.go:868: volcano-scheduler stabilized in 12.974618ms
addons_test.go:884: volcano-controller stabilized in 13.030773ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-799f64f894-hgrc8" [743ec981-2dc4-4e52-9c06-8ce26143a39d] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.002811845s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-589c7dd587-fdnb2" [a8719665-f4e4-4256-8795-0e29d2b9cb23] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.002295449s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-7dc6969b45-7s2xm" [ea3385e9-cd31-401f-9a61-0bd76941cd2f] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003147127s
addons_test.go:903: (dbg) Run:  kubectl --context addons-048605 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-048605 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-048605 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [d95cdad3-66e0-4023-bbbf-da8a4a38bb26] Pending
helpers_test.go:352: "test-job-nginx-0" [d95cdad3-66e0-4023-bbbf-da8a4a38bb26] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [d95cdad3-66e0-4023-bbbf-da8a4a38bb26] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.002855156s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-048605 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-048605 addons disable volcano --alsologtostderr -v=1: (11.231909335s)
--- PASS: TestAddons/serial/Volcano (39.55s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-048605 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-048605 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.43s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-048605 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-048605 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3078a92c-8159-4482-a2a5-6537d6cfa164] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3078a92c-8159-4482-a2a5-6537d6cfa164] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003418751s
addons_test.go:694: (dbg) Run:  kubectl --context addons-048605 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-048605 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-048605 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.43s)

                                                
                                    
TestAddons/parallel/Registry (16.81s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 21.137085ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-w96xz" [fa9d4bba-17f9-4ca3-a5ed-bafe865541a9] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.207267251s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-2xrgt" [1e40b1a0-e9e4-458e-bddf-8fcaec71fa98] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.002808424s
addons_test.go:392: (dbg) Run:  kubectl --context addons-048605 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-048605 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-048605 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.763245969s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-048605 ip
2025/09/26 22:33:11 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-048605 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.81s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.6s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.371268ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-048605
addons_test.go:332: (dbg) Run:  kubectl --context addons-048605 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-048605 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.60s)

                                                
                                    
TestAddons/parallel/Ingress (22.39s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-048605 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-048605 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-048605 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [a2f951d1-8d2c-466f-857d-d6b96a908073] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [a2f951d1-8d2c-466f-857d-d6b96a908073] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.002951269s
I0926 22:33:19.435543   13040 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-048605 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-048605 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-048605 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-048605 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-048605 addons disable ingress-dns --alsologtostderr -v=1: (1.557484226s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-048605 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-048605 addons disable ingress --alsologtostderr -v=1: (7.664772245s)
--- PASS: TestAddons/parallel/Ingress (22.39s)

                                                
                                    
TestAddons/parallel/InspektorGadget (5.29s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-mlhv6" [dcc99f4e-8dc9-4e23-8e9f-8e5b6c59912f] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.031439711s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-048605 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.29s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.82s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 21.20356ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-s6n4c" [d334fedf-3b6d-41f6-b3de-c562608963c6] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.129862877s
addons_test.go:463: (dbg) Run:  kubectl --context addons-048605 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-048605 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.82s)

                                                
                                    
TestAddons/parallel/CSI (38.47s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0926 22:33:13.699795   13040 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0926 22:33:13.701892   13040 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0926 22:33:13.701912   13040 kapi.go:107] duration metric: took 2.135115ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 2.144535ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-048605 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048605 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-048605 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [3ec91a96-aad9-47f4-9ae9-65316f49d326] Pending
helpers_test.go:352: "task-pv-pod" [3ec91a96-aad9-47f4-9ae9-65316f49d326] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [3ec91a96-aad9-47f4-9ae9-65316f49d326] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003124617s
addons_test.go:572: (dbg) Run:  kubectl --context addons-048605 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-048605 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-048605 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-048605 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-048605 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-048605 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-048605 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [47ac9231-d2ce-4a2f-aca8-a4d63b64493e] Pending
helpers_test.go:352: "task-pv-pod-restore" [47ac9231-d2ce-4a2f-aca8-a4d63b64493e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [47ac9231-d2ce-4a2f-aca8-a4d63b64493e] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.002429786s
addons_test.go:614: (dbg) Run:  kubectl --context addons-048605 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-048605 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-048605 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-048605 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-048605 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-048605 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.489629012s)
--- PASS: TestAddons/parallel/CSI (38.47s)

                                                
                                    
TestAddons/parallel/Headlamp (17.4s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-048605 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-l7tmv" [51ec907b-9f7c-4e2a-b35b-b88c741bf462] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-l7tmv" [51ec907b-9f7c-4e2a-b35b-b88c741bf462] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-l7tmv" [51ec907b-9f7c-4e2a-b35b-b88c741bf462] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004035232s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-048605 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-048605 addons disable headlamp --alsologtostderr -v=1: (5.675483508s)
--- PASS: TestAddons/parallel/Headlamp (17.40s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.49s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-td4r4" [633f237b-70fd-47e9-bd61-36a88f9a5e28] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003741549s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-048605 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.49s)

                                                
                                    
TestAddons/parallel/LocalPath (16.13s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-048605 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-048605 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048605 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048605 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048605 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048605 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048605 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048605 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048605 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [f53fdcd8-5d76-4e8a-a899-09060be103c6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [f53fdcd8-5d76-4e8a-a899-09060be103c6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [f53fdcd8-5d76-4e8a-a899-09060be103c6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.002565096s
addons_test.go:967: (dbg) Run:  kubectl --context addons-048605 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-048605 ssh "cat /opt/local-path-provisioner/pvc-8d02d742-b1cb-40fd-8405-10d79a57af25_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-048605 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-048605 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-048605 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (16.13s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.49s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-lnp7m" [9ef452e1-8ff4-4d51-9a6b-4737b4246348] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.006003753s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-048605 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.49s)

                                                
                                    
TestAddons/parallel/Yakd (10.64s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-trw6d" [97f615b4-97f4-4cea-ae72-b72727b91d5e] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004184535s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-048605 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-048605 addons disable yakd --alsologtostderr -v=1: (5.631805325s)
--- PASS: TestAddons/parallel/Yakd (10.64s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.46s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-dvgjw" [d56210c7-f7a8-4adb-830d-12aaccb46ba9] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.005654849s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-048605 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.46s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.14s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-048605
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-048605: (11.915803931s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-048605
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-048605
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-048605
--- PASS: TestAddons/StoppedEnableDisable (12.14s)

                                                
                                    
TestCertOptions (25.06s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-505421 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-505421 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (22.554831035s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-505421 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-505421 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-505421 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-505421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-505421
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-505421: (1.924244886s)
--- PASS: TestCertOptions (25.06s)

                                                
                                    
TestCertExpiration (211.49s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-767430 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-767430 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (22.716253344s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-767430 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-767430 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.303432606s)
helpers_test.go:175: Cleaning up "cert-expiration-767430" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-767430
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-767430: (2.466697418s)
--- PASS: TestCertExpiration (211.49s)

                                                
                                    
TestForceSystemdFlag (23.77s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-028237 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-028237 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (21.170609081s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-028237 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-028237" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-028237
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-028237: (2.351583435s)
--- PASS: TestForceSystemdFlag (23.77s)

                                                
                                    
TestForceSystemdEnv (36.02s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-100656 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-100656 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.73189403s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-100656 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-100656" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-100656
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-100656: (1.976521598s)
--- PASS: TestForceSystemdEnv (36.02s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0.63s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0926 23:14:39.702373   13040 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0926 23:14:39.702561   13040 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3286838667/001:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0926 23:14:39.730131   13040 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3286838667/001/docker-machine-driver-kvm2 version is 1.1.1
W0926 23:14:39.730169   13040 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W0926 23:14:39.730276   13040 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0926 23:14:39.730310   13040 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3286838667/001/docker-machine-driver-kvm2
I0926 23:14:40.197925   13040 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3286838667/001:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0926 23:14:40.214427   13040 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3286838667/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.63s)

                                                
                                    
TestErrorSpam/setup (19.97s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-672283 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-672283 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-672283 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-672283 --driver=docker  --container-runtime=containerd: (19.971762782s)
--- PASS: TestErrorSpam/setup (19.97s)

                                                
                                    
TestErrorSpam/start (0.56s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-672283 --log_dir /tmp/nospam-672283 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-672283 --log_dir /tmp/nospam-672283 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-672283 --log_dir /tmp/nospam-672283 start --dry-run
--- PASS: TestErrorSpam/start (0.56s)

                                                
                                    
TestErrorSpam/status (0.85s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-672283 --log_dir /tmp/nospam-672283 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-672283 --log_dir /tmp/nospam-672283 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-672283 --log_dir /tmp/nospam-672283 status
--- PASS: TestErrorSpam/status (0.85s)

                                                
                                    
TestErrorSpam/pause (1.34s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-672283 --log_dir /tmp/nospam-672283 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-672283 --log_dir /tmp/nospam-672283 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-672283 --log_dir /tmp/nospam-672283 pause
--- PASS: TestErrorSpam/pause (1.34s)

                                                
                                    
TestErrorSpam/unpause (1.41s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-672283 --log_dir /tmp/nospam-672283 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-672283 --log_dir /tmp/nospam-672283 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-672283 --log_dir /tmp/nospam-672283 unpause
--- PASS: TestErrorSpam/unpause (1.41s)

                                                
                                    
TestErrorSpam/stop (1.37s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-672283 --log_dir /tmp/nospam-672283 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-672283 --log_dir /tmp/nospam-672283 stop: (1.199159526s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-672283 --log_dir /tmp/nospam-672283 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-672283 --log_dir /tmp/nospam-672283 stop
--- PASS: TestErrorSpam/stop (1.37s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21642-9508/.minikube/files/etc/test/nested/copy/13040/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (43.33s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-459506 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-459506 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (43.328105228s)
--- PASS: TestFunctional/serial/StartWithProxy (43.33s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (5.78s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0926 22:36:00.528580   13040 config.go:182] Loaded profile config "functional-459506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-459506 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-459506 --alsologtostderr -v=8: (5.77536028s)
functional_test.go:678: soft start took 5.776104876s for "functional-459506" cluster.
I0926 22:36:06.304326   13040 config.go:182] Loaded profile config "functional-459506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (5.78s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-459506 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-459506 cache add registry.k8s.io/pause:3.3: (1.014795438s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.72s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.84s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-459506 /tmp/TestFunctionalserialCacheCmdcacheadd_local2852370968/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 cache add minikube-local-cache-test:functional-459506
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-459506 cache add minikube-local-cache-test:functional-459506: (1.546568115s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 cache delete minikube-local-cache-test:functional-459506
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-459506
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.84s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-459506 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (269.722043ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)
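For reference, this flow can be reproduced by hand; a minimal sketch, assuming the functional-459506 profile is still running and registry.k8s.io/pause:latest is already in minikube's host-side cache:

	out/minikube-linux-amd64 -p functional-459506 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-459506 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail: image no longer in the node
	out/minikube-linux-amd64 -p functional-459506 cache reload
	out/minikube-linux-amd64 -p functional-459506 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again after the reload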

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 kubectl -- --context functional-459506 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-459506 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.01s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-459506 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-459506 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.013007388s)
functional_test.go:776: restart took 40.013122784s for "functional-459506" cluster.
I0926 22:36:53.167612   13040 config.go:182] Loaded profile config "functional-459506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (40.01s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-459506 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-459506 logs: (1.281895188s)
--- PASS: TestFunctional/serial/LogsCmd (1.28s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 logs --file /tmp/TestFunctionalserialLogsFileCmd4028582142/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-459506 logs --file /tmp/TestFunctionalserialLogsFileCmd4028582142/001/logs.txt: (1.301489152s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.30s)

                                                
                                    
TestFunctional/serial/InvalidService (4.24s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-459506 apply -f testdata/invalidsvc.yaml
E0926 22:36:57.766297   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:36:57.772660   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:36:57.784035   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:36:57.805386   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:36:57.846696   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:36:57.928047   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:36:58.089515   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:36:58.411163   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-459506
E0926 22:36:59.053435   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-459506: exit status 115 (313.706105ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31679 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-459506 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.24s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-459506 config get cpus: exit status 14 (59.015144ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 config get cpus
E0926 22:37:00.335338   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-459506 config get cpus: exit status 14 (49.061157ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)
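The two non-zero exits above are expected: config get returns exit status 14 when the requested key is not set. A minimal sketch of the same sequence, assuming the functional-459506 profile exists:

	out/minikube-linux-amd64 -p functional-459506 config unset cpus
	out/minikube-linux-amd64 -p functional-459506 config get cpus     # exit status 14: key not found
	out/minikube-linux-amd64 -p functional-459506 config set cpus 2
	out/minikube-linux-amd64 -p functional-459506 config get cpus     # prints 2
	out/minikube-linux-amd64 -p functional-459506 config unset cpus
	out/minikube-linux-amd64 -p functional-459506 config get cpus     # exit status 14 again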

                                                
                                    
TestFunctional/parallel/DryRun (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-459506 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-459506 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (141.200146ms)

                                                
                                                
-- stdout --
	* [functional-459506] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-9508/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 22:42:46.565450   62498 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:42:46.565553   62498 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:42:46.565564   62498 out.go:374] Setting ErrFile to fd 2...
	I0926 22:42:46.565571   62498 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:42:46.565809   62498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
	I0926 22:42:46.566236   62498 out.go:368] Setting JSON to false
	I0926 22:42:46.567204   62498 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1502,"bootTime":1758925065,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:42:46.567293   62498 start.go:140] virtualization: kvm guest
	I0926 22:42:46.568887   62498 out.go:179] * [functional-459506] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:42:46.570240   62498 notify.go:220] Checking for updates...
	I0926 22:42:46.570251   62498 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:42:46.571235   62498 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:42:46.572198   62498 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-9508/kubeconfig
	I0926 22:42:46.573178   62498 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube
	I0926 22:42:46.574257   62498 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:42:46.575214   62498 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:42:46.579276   62498 config.go:182] Loaded profile config "functional-459506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0926 22:42:46.579858   62498 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:42:46.603119   62498 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:42:46.603185   62498 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:42:46.655847   62498 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-26 22:42:46.646296367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:42:46.655944   62498 docker.go:318] overlay module found
	I0926 22:42:46.657489   62498 out.go:179] * Using the docker driver based on existing profile
	I0926 22:42:46.658489   62498 start.go:304] selected driver: docker
	I0926 22:42:46.658500   62498 start.go:924] validating driver "docker" against &{Name:functional-459506 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-459506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:42:46.658568   62498 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:42:46.660042   62498 out.go:203] 
	W0926 22:42:46.660985   62498 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0926 22:42:46.661970   62498 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-459506 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.33s)
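The first dry run above is expected to fail: a 250MB request is below minikube's usable minimum of 1800MB, which surfaces as RSRC_INSUFFICIENT_REQ_MEMORY and exit code 23; the second dry run, with no memory override, validates the existing profile cleanly. A minimal sketch of the failing invocation, assuming the same profile:

	out/minikube-linux-amd64 start -p functional-459506 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
	echo $?   # 23 (RSRC_INSUFFICIENT_REQ_MEMORY)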

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-459506 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-459506 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (140.416051ms)

                                                
                                                
-- stdout --
	* [functional-459506] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-9508/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 22:42:46.896482   62711 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:42:46.896577   62711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:42:46.896585   62711 out.go:374] Setting ErrFile to fd 2...
	I0926 22:42:46.896589   62711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:42:46.896870   62711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
	I0926 22:42:46.897298   62711 out.go:368] Setting JSON to false
	I0926 22:42:46.898165   62711 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1502,"bootTime":1758925065,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:42:46.898235   62711 start.go:140] virtualization: kvm guest
	I0926 22:42:46.899971   62711 out.go:179] * [functional-459506] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0926 22:42:46.900984   62711 notify.go:220] Checking for updates...
	I0926 22:42:46.900989   62711 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:42:46.901968   62711 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:42:46.902910   62711 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-9508/kubeconfig
	I0926 22:42:46.904148   62711 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube
	I0926 22:42:46.905129   62711 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:42:46.906068   62711 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:42:46.907486   62711 config.go:182] Loaded profile config "functional-459506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0926 22:42:46.908008   62711 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:42:46.930088   62711 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:42:46.930160   62711 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:42:46.982478   62711 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-26 22:42:46.973277108 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:42:46.982575   62711 docker.go:318] overlay module found
	I0926 22:42:46.984004   62711 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0926 22:42:46.985050   62711 start.go:304] selected driver: docker
	I0926 22:42:46.985070   62711 start.go:924] validating driver "docker" against &{Name:functional-459506 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-459506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:42:46.985173   62711 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:42:46.986851   62711 out.go:203] 
	W0926 22:42:46.987810   62711 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0926 22:42:46.988796   62711 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.86s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.61s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh -n functional-459506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 cp functional-459506:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1386564896/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh -n functional-459506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh -n functional-459506 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.69s)

                                                
                                    
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/13040/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh "sudo cat /etc/test/nested/copy/13040/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

                                                
                                    
TestFunctional/parallel/CertSync (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/13040.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh "sudo cat /etc/ssl/certs/13040.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/13040.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh "sudo cat /usr/share/ca-certificates/13040.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/130402.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh "sudo cat /etc/ssl/certs/130402.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/130402.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh "sudo cat /usr/share/ca-certificates/130402.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.48s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-459506 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-459506 ssh "sudo systemctl is-active docker": exit status 1 (282.555285ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-459506 ssh "sudo systemctl is-active crio": exit status 1 (314.562231ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)
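For context, systemctl is-active exits 0 only when a unit is active and otherwise prints the state and exits non-zero (here "inactive" with remote exit status 3), so both non-zero exits above are the expected outcome on a containerd-runtime cluster. A minimal check, assuming the same profile:

	out/minikube-linux-amd64 -p functional-459506 ssh "sudo systemctl is-active containerd"   # expected: active, exit 0
	out/minikube-linux-amd64 -p functional-459506 ssh "sudo systemctl is-active docker"       # expected: inactive, non-zero exit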

                                                
                                    
TestFunctional/parallel/License (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-459506 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-459506 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-459506 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-459506 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 55027: os: process already finished
helpers_test.go:519: unable to terminate pid 54621: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-459506 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-459506 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "302.312178ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "45.047504ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "304.456641ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "43.717868ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
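The ProfileCmd subtests above time the plain and --light listings; the much faster --light run (~44ms vs ~300ms here) is apparently because it skips the per-cluster status probes. A hedged sketch for inspecting the same output; the jq call and the valid/Name field names are illustrative assumptions, not part of the test:

  minikube profile list -o json           # full listing with cluster status
  minikube profile list -o json --light   # metadata only, no status probes
  # extract profile names (assumes the JSON exposes a "valid" array of profiles with a "Name" field)
  minikube profile list -o json --light | jq -r '.valid[].Name'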

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-459506 /tmp/TestFunctionalparallelMountCmdany-port1539084006/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1758926549287286381" to /tmp/TestFunctionalparallelMountCmdany-port1539084006/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1758926549287286381" to /tmp/TestFunctionalparallelMountCmdany-port1539084006/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1758926549287286381" to /tmp/TestFunctionalparallelMountCmdany-port1539084006/001/test-1758926549287286381
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-459506 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (249.637403ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0926 22:42:29.537216   13040 retry.go:31] will retry after 538.879319ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 26 22:42 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 26 22:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 26 22:42 test-1758926549287286381
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh cat /mount-9p/test-1758926549287286381
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-459506 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [205cad34-1a32-4816-b090-7d156301fcf5] Pending
helpers_test.go:352: "busybox-mount" [205cad34-1a32-4816-b090-7d156301fcf5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [205cad34-1a32-4816-b090-7d156301fcf5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [205cad34-1a32-4816-b090-7d156301fcf5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.002931181s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-459506 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-459506 /tmp/TestFunctionalparallelMountCmdany-port1539084006/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.54s)
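The any-port test above starts a background 9p mount, polls findmnt until the guest sees it, then runs a busybox pod that reads and writes through the mount. A condensed reproduction sketch using the same commands the log shows (the host directory /tmp/mount-src is a placeholder; add --port 46464 to pin the port as the specific-port variant does):

  # start the 9p mount in the background on an auto-selected port
  minikube mount -p functional-459506 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
  MOUNT_PID=$!
  # wait for the guest to see the 9p filesystem (the test retries while this exits 1)
  minikube -p functional-459506 ssh "findmnt -T /mount-9p | grep 9p"
  minikube -p functional-459506 ssh -- ls -la /mount-9p
  # tear the mount down
  kill "$MOUNT_PID"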

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-459506 /tmp/TestFunctionalparallelMountCmdspecific-port510570478/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-459506 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (245.610593ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0926 22:42:36.074195   13040 retry.go:31] will retry after 602.720022ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-459506 /tmp/TestFunctionalparallelMountCmdspecific-port510570478/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-459506 ssh "sudo umount -f /mount-9p": exit status 1 (244.211567ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-459506 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-459506 /tmp/TestFunctionalparallelMountCmdspecific-port510570478/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-459506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3702317322/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-459506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3702317322/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-459506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3702317322/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-459506 ssh "findmnt -T" /mount1: exit status 1 (290.315203ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0926 22:42:37.907407   13040 retry.go:31] will retry after 571.451023ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-459506 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-459506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3702317322/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-459506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3702317322/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-459506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3702317322/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)
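VerifyCleanup above stacks three mounts of one host directory at /mount1, /mount2 and /mount3 and then relies on a single kill command to reap them all; the follow-up stop calls find no surviving parent processes, which is the expected outcome. A sketch of the same cleanup path (the host directory is a placeholder):

  minikube mount -p functional-459506 /tmp/mount-src:/mount1 --alsologtostderr -v=1 &
  minikube mount -p functional-459506 /tmp/mount-src:/mount2 --alsologtostderr -v=1 &
  minikube mount -p functional-459506 /tmp/mount-src:/mount3 --alsologtostderr -v=1 &
  minikube -p functional-459506 ssh "findmnt -T" /mount1   # repeat for /mount2 and /mount3
  # kill every mount helper for the profile in one shot
  minikube mount -p functional-459506 --kill=true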

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-459506 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-459506
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-459506
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-459506 image ls --format short --alsologtostderr:
I0926 22:47:05.598880   66811 out.go:360] Setting OutFile to fd 1 ...
I0926 22:47:05.599155   66811 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:47:05.599165   66811 out.go:374] Setting ErrFile to fd 2...
I0926 22:47:05.599169   66811 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:47:05.599325   66811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
I0926 22:47:05.599856   66811 config.go:182] Loaded profile config "functional-459506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0926 22:47:05.599957   66811 config.go:182] Loaded profile config "functional-459506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0926 22:47:05.600292   66811 cli_runner.go:164] Run: docker container inspect functional-459506 --format={{.State.Status}}
I0926 22:47:05.617654   66811 ssh_runner.go:195] Run: systemctl --version
I0926 22:47:05.617693   66811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-459506
I0926 22:47:05.635605   66811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/functional-459506/id_rsa Username:docker}
I0926 22:47:05.727160   66811 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-459506 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.0            │ sha256:df0860 │ 26MB   │
│ registry.k8s.io/kube-scheduler              │ v1.34.0            │ sha256:46169d │ 17.4MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.0            │ sha256:a0af72 │ 22.8MB │
│ docker.io/kicbase/echo-server               │ functional-459506  │ sha256:9056ab │ 2.37MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.0            │ sha256:90550c │ 27.1MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ docker.io/library/minikube-local-cache-test │ functional-459506  │ sha256:11c32a │ 991B   │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-459506 image ls --format table --alsologtostderr:
I0926 22:47:08.542243   67899 out.go:360] Setting OutFile to fd 1 ...
I0926 22:47:08.542484   67899 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:47:08.542494   67899 out.go:374] Setting ErrFile to fd 2...
I0926 22:47:08.542498   67899 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:47:08.542665   67899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
I0926 22:47:08.543177   67899 config.go:182] Loaded profile config "functional-459506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0926 22:47:08.543270   67899 config.go:182] Loaded profile config "functional-459506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0926 22:47:08.543612   67899 cli_runner.go:164] Run: docker container inspect functional-459506 --format={{.State.Status}}
I0926 22:47:08.560662   67899 ssh_runner.go:195] Run: systemctl --version
I0926 22:47:08.560702   67899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-459506
I0926 22:47:08.576890   67899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/functional-459506/id_rsa Username:docker}
I0926 22:47:08.668043   67899 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-459506 image ls --format json --alsologtostderr:
[{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"27066504"},{"id":"sha256:11c32aa41e34525fc5edf21465f2a42c2dd40929dda21de15a85d1bcbdb087ba","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-459506"],"size":"991"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22
384805"},{"id":"sha256:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"22819719"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:46169d968e9203e8b10debaf89
8210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"17385558"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-459506"],"size":"2372971"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"25963701"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0f
be50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-459506 image ls --format json --alsologtostderr:
I0926 22:47:08.325839   67849 out.go:360] Setting OutFile to fd 1 ...
I0926 22:47:08.326164   67849 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:47:08.326178   67849 out.go:374] Setting ErrFile to fd 2...
I0926 22:47:08.326191   67849 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:47:08.326472   67849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
I0926 22:47:08.327274   67849 config.go:182] Loaded profile config "functional-459506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0926 22:47:08.327427   67849 config.go:182] Loaded profile config "functional-459506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0926 22:47:08.327999   67849 cli_runner.go:164] Run: docker container inspect functional-459506 --format={{.State.Status}}
I0926 22:47:08.349166   67849 ssh_runner.go:195] Run: systemctl --version
I0926 22:47:08.349203   67849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-459506
I0926 22:47:08.369018   67849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/functional-459506/id_rsa Username:docker}
I0926 22:47:08.463054   67849 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-459506 image ls --format yaml --alsologtostderr:
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "27066504"
- id: sha256:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "25963701"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "17385558"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-459506
size: "2372971"
- id: sha256:11c32aa41e34525fc5edf21465f2a42c2dd40929dda21de15a85d1bcbdb087ba
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-459506
size: "991"
- id: sha256:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "22819719"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-459506 image ls --format yaml --alsologtostderr:
I0926 22:47:05.804023   66878 out.go:360] Setting OutFile to fd 1 ...
I0926 22:47:05.804268   66878 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:47:05.804278   66878 out.go:374] Setting ErrFile to fd 2...
I0926 22:47:05.804284   66878 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:47:05.804468   66878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
I0926 22:47:05.805018   66878 config.go:182] Loaded profile config "functional-459506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0926 22:47:05.805131   66878 config.go:182] Loaded profile config "functional-459506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0926 22:47:05.805474   66878 cli_runner.go:164] Run: docker container inspect functional-459506 --format={{.State.Status}}
I0926 22:47:05.822866   66878 ssh_runner.go:195] Run: systemctl --version
I0926 22:47:05.822918   66878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-459506
I0926 22:47:05.838807   66878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/functional-459506/id_rsa Username:docker}
I0926 22:47:05.929424   66878 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)
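The four ImageList subtests above render the same inventory in short, table, json and yaml form; the Stderr traces show that on the containerd runtime each listing is backed by "sudo crictl images --output json" run over SSH inside the node. The equivalent direct invocations:

  minikube -p functional-459506 image ls --format short
  minikube -p functional-459506 image ls --format table
  minikube -p functional-459506 image ls --format json
  minikube -p functional-459506 image ls --format yaml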

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-459506 ssh pgrep buildkitd: exit status 1 (256.453202ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 image build -t localhost/my-image:functional-459506 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-459506 image build -t localhost/my-image:functional-459506 testdata/build --alsologtostderr: (2.911711636s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-459506 image build -t localhost/my-image:functional-459506 testdata/build --alsologtostderr:
I0926 22:47:06.279407   67112 out.go:360] Setting OutFile to fd 1 ...
I0926 22:47:06.279589   67112 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:47:06.279599   67112 out.go:374] Setting ErrFile to fd 2...
I0926 22:47:06.279602   67112 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:47:06.279797   67112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
I0926 22:47:06.280365   67112 config.go:182] Loaded profile config "functional-459506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0926 22:47:06.280965   67112 config.go:182] Loaded profile config "functional-459506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0926 22:47:06.281309   67112 cli_runner.go:164] Run: docker container inspect functional-459506 --format={{.State.Status}}
I0926 22:47:06.298872   67112 ssh_runner.go:195] Run: systemctl --version
I0926 22:47:06.298929   67112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-459506
I0926 22:47:06.314907   67112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/functional-459506/id_rsa Username:docker}
I0926 22:47:06.405664   67112 build_images.go:161] Building image from path: /tmp/build.1842269399.tar
I0926 22:47:06.405741   67112 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0926 22:47:06.415624   67112 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1842269399.tar
I0926 22:47:06.418978   67112 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1842269399.tar: stat -c "%s %y" /var/lib/minikube/build/build.1842269399.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1842269399.tar': No such file or directory
I0926 22:47:06.419004   67112 ssh_runner.go:362] scp /tmp/build.1842269399.tar --> /var/lib/minikube/build/build.1842269399.tar (3072 bytes)
I0926 22:47:06.443821   67112 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1842269399
I0926 22:47:06.453174   67112 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1842269399 -xf /var/lib/minikube/build/build.1842269399.tar
I0926 22:47:06.462066   67112 containerd.go:394] Building image: /var/lib/minikube/build/build.1842269399
I0926 22:47:06.462126   67112 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1842269399 --local dockerfile=/var/lib/minikube/build/build.1842269399 --output type=image,name=localhost/my-image:functional-459506
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.8s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.1s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:82d099cd9deb8510e56ab473031e92a668ef0aef4870a342def697011964de06 done
#8 exporting config sha256:1c2cf4418cd5d8303a9e28acfcddecbc763b2e4d037f11cbeffc30c9ed240b2d done
#8 naming to localhost/my-image:functional-459506 done
#8 DONE 0.1s
I0926 22:47:09.118034   67112 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1842269399 --local dockerfile=/var/lib/minikube/build/build.1842269399 --output type=image,name=localhost/my-image:functional-459506: (2.655869251s)
I0926 22:47:09.118120   67112 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1842269399
I0926 22:47:09.127210   67112 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1842269399.tar
I0926 22:47:09.135501   67112 build_images.go:217] Built localhost/my-image:functional-459506 from /tmp/build.1842269399.tar
I0926 22:47:09.135534   67112 build_images.go:133] succeeded building to: functional-459506
I0926 22:47:09.135540   67112 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.37s)
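ImageBuild above first probes for a buildkitd daemon with pgrep; since none is running under the containerd runtime, minikube copies the build context tarball to /var/lib/minikube/build on the node and drives buildctl there, as the Stderr trace shows. A reproduction sketch:

  # no buildkitd is expected on the node, so this returns exit status 1
  minikube -p functional-459506 ssh pgrep buildkitd
  # build a directory containing a Dockerfile straight into the node's image store
  minikube -p functional-459506 image build -t localhost/my-image:functional-459506 testdata/build --alsologtostderr
  # confirm the result is visible to the runtime
  minikube -p functional-459506 image ls --format short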

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.72302835s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-459506
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 image load --daemon kicbase/echo-server:functional-459506 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 image load --daemon kicbase/echo-server:functional-459506 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-459506
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 image load --daemon kicbase/echo-server:functional-459506 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 image save kicbase/echo-server:functional-459506 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 image rm kicbase/echo-server:functional-459506 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-459506
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 image save --daemon kicbase/echo-server:functional-459506 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-459506
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)
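Taken together, the ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon subtests above form a round trip between the node's containerd store, a tarball on the host, and the host's docker daemon. A sketch of that round trip (the tarball path is arbitrary):

  # export the tagged image from the node to a tarball on the host
  minikube -p functional-459506 image save kicbase/echo-server:functional-459506 /tmp/echo-server-save.tar
  # drop it from the node, then re-import it from the tarball
  minikube -p functional-459506 image rm kicbase/echo-server:functional-459506
  minikube -p functional-459506 image load /tmp/echo-server-save.tar
  # or push it into the host's docker daemon and verify it arrived
  minikube -p functional-459506 image save --daemon kicbase/echo-server:functional-459506
  docker image inspect kicbase/echo-server:functional-459506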

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-459506 service list: (1.674199089s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.67s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-459506 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-459506 service list -o json: (1.679867187s)
functional_test.go:1504: Took "1.679996115s" to run "out/minikube-linux-amd64 -p functional-459506 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)
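The two ServiceCmd subtests above list services for the profile in plain and JSON form; the ~1.7s runtimes suggest the command queries the cluster rather than reading cached state. The equivalent invocations:

  minikube -p functional-459506 service list
  minikube -p functional-459506 service list -o json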

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-459506
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-459506
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-459506
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (91.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E0926 22:53:20.830086   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-090263 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m30.938304317s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (91.61s)
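StartCluster above brings up a multi-control-plane cluster in roughly 91 seconds and then asserts per-node status. A sketch of the same invocation with the log's own profile name (the --ha flag requests additional control planes):

  minikube start -p ha-090263 --ha --memory 3072 --wait true --driver=docker --container-runtime=containerd
  minikube -p ha-090263 status --alsologtostderr -v 5
  # the later AddWorkerNode subtest grows the cluster with:
  minikube -p ha-090263 node add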

                                                
                                    
TestMultiControlPlane/serial/DeployApp (18.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-090263 kubectl -- rollout status deployment/busybox: (16.447095942s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 kubectl -- exec busybox-7b57f96db7-4ncjg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 kubectl -- exec busybox-7b57f96db7-5cgzc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 kubectl -- exec busybox-7b57f96db7-h866s -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 kubectl -- exec busybox-7b57f96db7-4ncjg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 kubectl -- exec busybox-7b57f96db7-5cgzc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 kubectl -- exec busybox-7b57f96db7-h866s -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 kubectl -- exec busybox-7b57f96db7-4ncjg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 kubectl -- exec busybox-7b57f96db7-5cgzc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 kubectl -- exec busybox-7b57f96db7-h866s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (18.37s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 kubectl -- exec busybox-7b57f96db7-4ncjg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 kubectl -- exec busybox-7b57f96db7-4ncjg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 kubectl -- exec busybox-7b57f96db7-5cgzc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 kubectl -- exec busybox-7b57f96db7-5cgzc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 kubectl -- exec busybox-7b57f96db7-h866s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 kubectl -- exec busybox-7b57f96db7-h866s -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.99s)
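PingHostFromPods above resolves host.minikube.internal from each busybox replica and pings the docker network gateway (192.168.49.1). A sketch against one pod, with the pod name taken from this run:

  POD=busybox-7b57f96db7-4ncjg
  # print the host's address as seen from inside the pod
  minikube -p ha-090263 kubectl -- exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  # ping the gateway of the cluster's docker network
  minikube -p ha-090263 kubectl -- exec "$POD" -- sh -c "ping -c 1 192.168.49.1"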

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (12.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-090263 node add --alsologtostderr -v 5: (11.760103463s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (12.64s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-090263 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (15.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 cp testdata/cp-test.txt ha-090263:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 cp ha-090263:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile197953706/001/cp-test_ha-090263.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 cp ha-090263:/home/docker/cp-test.txt ha-090263-m02:/home/docker/cp-test_ha-090263_ha-090263-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m02 "sudo cat /home/docker/cp-test_ha-090263_ha-090263-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 cp ha-090263:/home/docker/cp-test.txt ha-090263-m03:/home/docker/cp-test_ha-090263_ha-090263-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m03 "sudo cat /home/docker/cp-test_ha-090263_ha-090263-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 cp ha-090263:/home/docker/cp-test.txt ha-090263-m04:/home/docker/cp-test_ha-090263_ha-090263-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m04 "sudo cat /home/docker/cp-test_ha-090263_ha-090263-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 cp testdata/cp-test.txt ha-090263-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 cp ha-090263-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile197953706/001/cp-test_ha-090263-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 cp ha-090263-m02:/home/docker/cp-test.txt ha-090263:/home/docker/cp-test_ha-090263-m02_ha-090263.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263 "sudo cat /home/docker/cp-test_ha-090263-m02_ha-090263.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 cp ha-090263-m02:/home/docker/cp-test.txt ha-090263-m03:/home/docker/cp-test_ha-090263-m02_ha-090263-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m03 "sudo cat /home/docker/cp-test_ha-090263-m02_ha-090263-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 cp ha-090263-m02:/home/docker/cp-test.txt ha-090263-m04:/home/docker/cp-test_ha-090263-m02_ha-090263-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m04 "sudo cat /home/docker/cp-test_ha-090263-m02_ha-090263-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 cp testdata/cp-test.txt ha-090263-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 cp ha-090263-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile197953706/001/cp-test_ha-090263-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 cp ha-090263-m03:/home/docker/cp-test.txt ha-090263:/home/docker/cp-test_ha-090263-m03_ha-090263.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263 "sudo cat /home/docker/cp-test_ha-090263-m03_ha-090263.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 cp ha-090263-m03:/home/docker/cp-test.txt ha-090263-m02:/home/docker/cp-test_ha-090263-m03_ha-090263-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m02 "sudo cat /home/docker/cp-test_ha-090263-m03_ha-090263-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 cp ha-090263-m03:/home/docker/cp-test.txt ha-090263-m04:/home/docker/cp-test_ha-090263-m03_ha-090263-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m04 "sudo cat /home/docker/cp-test_ha-090263-m03_ha-090263-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 cp testdata/cp-test.txt ha-090263-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 cp ha-090263-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile197953706/001/cp-test_ha-090263-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 cp ha-090263-m04:/home/docker/cp-test.txt ha-090263:/home/docker/cp-test_ha-090263-m04_ha-090263.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263 "sudo cat /home/docker/cp-test_ha-090263-m04_ha-090263.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 cp ha-090263-m04:/home/docker/cp-test.txt ha-090263-m02:/home/docker/cp-test_ha-090263-m04_ha-090263-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m02 "sudo cat /home/docker/cp-test_ha-090263-m04_ha-090263-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 cp ha-090263-m04:/home/docker/cp-test.txt ha-090263-m03:/home/docker/cp-test_ha-090263-m04_ha-090263-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 ssh -n ha-090263-m03 "sudo cat /home/docker/cp-test_ha-090263-m04_ha-090263-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.67s)
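The CopyFile steps above repeat one pattern for every node pair: minikube cp pushes a file onto a node, then minikube ssh -n <node> reads it back with sudo cat. A minimal standalone sketch of that pattern in Go, assuming minikube is on PATH; the profile and node names below are placeholders taken from this run:

// copy_verify.go: standalone sketch (not part of the suite) of the
// cp-then-verify pattern above. "ha-090263" / "-m02" are placeholders.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func minikube(args ...string) (string, error) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile, node := "ha-090263", "ha-090263-m02"
	// Push a local file to the node, as "minikube cp" does above.
	if out, err := minikube("-p", profile, "cp", "testdata/cp-test.txt", node+":/home/docker/cp-test.txt"); err != nil {
		fmt.Fprint(os.Stderr, out)
		os.Exit(1)
	}
	// Read it back over SSH, as "minikube ssh -n <node> sudo cat ..." does above.
	out, err := minikube("-p", profile, "ssh", "-n", node, "sudo cat /home/docker/cp-test.txt")
	if err != nil {
		fmt.Fprint(os.Stderr, out)
		os.Exit(1)
	}
	fmt.Print(out)
}

The test compares the cat output against the original testdata/cp-test.txt contents; the sketch just prints it.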

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-090263 node stop m02 --alsologtostderr -v 5: (11.884785558s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-090263 status --alsologtostderr -v 5: exit status 7 (634.447033ms)

                                                
                                                
-- stdout --
	ha-090263
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-090263-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-090263-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-090263-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 22:55:46.948031   91549 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:55:46.948143   91549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:55:46.948152   91549 out.go:374] Setting ErrFile to fd 2...
	I0926 22:55:46.948156   91549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:55:46.948347   91549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
	I0926 22:55:46.948501   91549 out.go:368] Setting JSON to false
	I0926 22:55:46.948536   91549 mustload.go:65] Loading cluster: ha-090263
	I0926 22:55:46.948651   91549 notify.go:220] Checking for updates...
	I0926 22:55:46.948906   91549 config.go:182] Loaded profile config "ha-090263": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0926 22:55:46.948923   91549 status.go:174] checking status of ha-090263 ...
	I0926 22:55:46.949277   91549 cli_runner.go:164] Run: docker container inspect ha-090263 --format={{.State.Status}}
	I0926 22:55:46.969802   91549 status.go:371] ha-090263 host status = "Running" (err=<nil>)
	I0926 22:55:46.969825   91549 host.go:66] Checking if "ha-090263" exists ...
	I0926 22:55:46.970121   91549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-090263
	I0926 22:55:46.987094   91549 host.go:66] Checking if "ha-090263" exists ...
	I0926 22:55:46.987307   91549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 22:55:46.987353   91549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-090263
	I0926 22:55:47.004732   91549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/ha-090263/id_rsa Username:docker}
	I0926 22:55:47.096637   91549 ssh_runner.go:195] Run: systemctl --version
	I0926 22:55:47.100680   91549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 22:55:47.111508   91549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:55:47.163343   91549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-26 22:55:47.153029104 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:55:47.163885   91549 kubeconfig.go:125] found "ha-090263" server: "https://192.168.49.254:8443"
	I0926 22:55:47.163917   91549 api_server.go:166] Checking apiserver status ...
	I0926 22:55:47.163947   91549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 22:55:47.174993   91549 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1501/cgroup
	W0926 22:55:47.184349   91549 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1501/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0926 22:55:47.184391   91549 ssh_runner.go:195] Run: ls
	I0926 22:55:47.187870   91549 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0926 22:55:47.191723   91549 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0926 22:55:47.191743   91549 status.go:463] ha-090263 apiserver status = Running (err=<nil>)
	I0926 22:55:47.191770   91549 status.go:176] ha-090263 status: &{Name:ha-090263 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 22:55:47.191790   91549 status.go:174] checking status of ha-090263-m02 ...
	I0926 22:55:47.192043   91549 cli_runner.go:164] Run: docker container inspect ha-090263-m02 --format={{.State.Status}}
	I0926 22:55:47.208652   91549 status.go:371] ha-090263-m02 host status = "Stopped" (err=<nil>)
	I0926 22:55:47.208674   91549 status.go:384] host is not running, skipping remaining checks
	I0926 22:55:47.208681   91549 status.go:176] ha-090263-m02 status: &{Name:ha-090263-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 22:55:47.208700   91549 status.go:174] checking status of ha-090263-m03 ...
	I0926 22:55:47.208959   91549 cli_runner.go:164] Run: docker container inspect ha-090263-m03 --format={{.State.Status}}
	I0926 22:55:47.224397   91549 status.go:371] ha-090263-m03 host status = "Running" (err=<nil>)
	I0926 22:55:47.224421   91549 host.go:66] Checking if "ha-090263-m03" exists ...
	I0926 22:55:47.224695   91549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-090263-m03
	I0926 22:55:47.240767   91549 host.go:66] Checking if "ha-090263-m03" exists ...
	I0926 22:55:47.240966   91549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 22:55:47.240998   91549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-090263-m03
	I0926 22:55:47.258595   91549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/ha-090263-m03/id_rsa Username:docker}
	I0926 22:55:47.349296   91549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 22:55:47.360433   91549 kubeconfig.go:125] found "ha-090263" server: "https://192.168.49.254:8443"
	I0926 22:55:47.360456   91549 api_server.go:166] Checking apiserver status ...
	I0926 22:55:47.360486   91549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 22:55:47.370632   91549 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1412/cgroup
	W0926 22:55:47.379802   91549 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1412/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0926 22:55:47.379851   91549 ssh_runner.go:195] Run: ls
	I0926 22:55:47.383265   91549 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0926 22:55:47.387114   91549 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0926 22:55:47.387132   91549 status.go:463] ha-090263-m03 apiserver status = Running (err=<nil>)
	I0926 22:55:47.387140   91549 status.go:176] ha-090263-m03 status: &{Name:ha-090263-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 22:55:47.387152   91549 status.go:174] checking status of ha-090263-m04 ...
	I0926 22:55:47.387370   91549 cli_runner.go:164] Run: docker container inspect ha-090263-m04 --format={{.State.Status}}
	I0926 22:55:47.404180   91549 status.go:371] ha-090263-m04 host status = "Running" (err=<nil>)
	I0926 22:55:47.404195   91549 host.go:66] Checking if "ha-090263-m04" exists ...
	I0926 22:55:47.404389   91549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-090263-m04
	I0926 22:55:47.420919   91549 host.go:66] Checking if "ha-090263-m04" exists ...
	I0926 22:55:47.421165   91549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 22:55:47.421221   91549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-090263-m04
	I0926 22:55:47.437954   91549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/ha-090263-m04/id_rsa Username:docker}
	I0926 22:55:47.528195   91549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 22:55:47.539095   91549 status.go:176] ha-090263-m04 status: &{Name:ha-090263-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.52s)
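Note the non-zero exit above: with m02 stopped, minikube status exits with code 7 instead of 0, and the per-node stdout reports host/kubelet/apiserver as Stopped. A small sketch, assuming the same profile name and minikube on PATH, that surfaces that exit code:

// status_exitcode.go: print minikube status output and its exit code;
// a non-zero code (7 in the run above) signals at least one stopped node.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "ha-090263", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("status exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
	}
}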

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (8.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-090263 node start m02 --alsologtostderr -v 5: (7.52961932s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.40s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (86.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-090263 stop --alsologtostderr -v 5: (30.56691569s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 start --wait true --alsologtostderr -v 5
E0926 22:56:57.766713   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:57:00.555423   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:57:00.561793   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:57:00.573100   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:57:00.594406   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:57:00.635710   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:57:00.717012   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:57:00.878492   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:57:01.200217   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:57:01.841956   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:57:03.123946   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:57:05.685307   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:57:10.807242   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:57:21.048663   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-090263 start --wait true --alsologtostderr -v 5: (56.030422971s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (86.69s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (8.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-090263 node delete m03 --alsologtostderr -v 5: (8.157961071s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.89s)
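The last step above verifies node readiness with a kubectl go-template that prints one Ready condition status per node. A sketch of the same check, assuming kubectl's current context points at the test cluster:

// ready_count.go: run the Ready-condition go-template from the log and
// count how many nodes report True.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d node(s) report Ready=True\n", strings.Count(string(out), "True"))
}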

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 stop --alsologtostderr -v 5
E0926 22:57:41.530077   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-090263 stop --alsologtostderr -v 5: (35.594655599s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-090263 status --alsologtostderr -v 5: exit status 7 (97.82504ms)

                                                
                                                
-- stdout --
	ha-090263
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-090263-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-090263-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 22:58:09.285688  107877 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:58:09.285801  107877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:58:09.285810  107877 out.go:374] Setting ErrFile to fd 2...
	I0926 22:58:09.285815  107877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:58:09.286034  107877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
	I0926 22:58:09.286198  107877 out.go:368] Setting JSON to false
	I0926 22:58:09.286231  107877 mustload.go:65] Loading cluster: ha-090263
	I0926 22:58:09.286324  107877 notify.go:220] Checking for updates...
	I0926 22:58:09.286580  107877 config.go:182] Loaded profile config "ha-090263": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0926 22:58:09.286595  107877 status.go:174] checking status of ha-090263 ...
	I0926 22:58:09.286966  107877 cli_runner.go:164] Run: docker container inspect ha-090263 --format={{.State.Status}}
	I0926 22:58:09.306222  107877 status.go:371] ha-090263 host status = "Stopped" (err=<nil>)
	I0926 22:58:09.306242  107877 status.go:384] host is not running, skipping remaining checks
	I0926 22:58:09.306248  107877 status.go:176] ha-090263 status: &{Name:ha-090263 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 22:58:09.306270  107877 status.go:174] checking status of ha-090263-m02 ...
	I0926 22:58:09.306484  107877 cli_runner.go:164] Run: docker container inspect ha-090263-m02 --format={{.State.Status}}
	I0926 22:58:09.323325  107877 status.go:371] ha-090263-m02 host status = "Stopped" (err=<nil>)
	I0926 22:58:09.323343  107877 status.go:384] host is not running, skipping remaining checks
	I0926 22:58:09.323355  107877 status.go:176] ha-090263-m02 status: &{Name:ha-090263-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 22:58:09.323374  107877 status.go:174] checking status of ha-090263-m04 ...
	I0926 22:58:09.323595  107877 cli_runner.go:164] Run: docker container inspect ha-090263-m04 --format={{.State.Status}}
	I0926 22:58:09.339578  107877 status.go:371] ha-090263-m04 host status = "Stopped" (err=<nil>)
	I0926 22:58:09.339595  107877 status.go:384] host is not running, skipping remaining checks
	I0926 22:58:09.339602  107877 status.go:176] ha-090263-m04 status: &{Name:ha-090263-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.69s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (51.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E0926 22:58:22.491353   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-090263 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (50.587287446s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (51.33s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (31.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-090263 node add --control-plane --alsologtostderr -v 5: (30.543124034s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-090263 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (31.38s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                    
TestJSONOutput/start/Command (44.12s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-421730 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E0926 22:59:44.414968   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-421730 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (44.115314127s)
--- PASS: TestJSONOutput/start/Command (44.12s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-421730 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.60s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.57s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-421730 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.65s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-421730 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-421730 --output=json --user=testUser: (5.650997387s)
--- PASS: TestJSONOutput/stop/Command (5.65s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-200424 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-200424 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (56.484225ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"697df5a3-5475-48e9-9db0-fc46cbf8580f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-200424] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0c1d144d-38e6-462a-830e-91bc5ff0c16d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21642"}}
	{"specversion":"1.0","id":"008fc19a-a02d-46f7-9947-96d83b472023","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cd07a956-839b-4395-abe7-fd83e844fabe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21642-9508/kubeconfig"}}
	{"specversion":"1.0","id":"72b7cc63-5db1-46d7-85b9-14f7bb4e5836","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube"}}
	{"specversion":"1.0","id":"82c4bfa4-b19e-4b9b-8207-26642ce0d92b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"481583f2-3b8a-474f-a823-5c82d8111a70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2f714c3d-e73c-4608-8f4a-480de82ba049","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-200424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-200424
--- PASS: TestErrorJSONOutput (0.19s)
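Each stdout line above is a CloudEvents-style JSON object emitted by --output=json; the final one carries the DRV_UNSUPPORTED_OS error with exitcode 56. A sketch that decodes one such line, using only the fields visible in this capture:

// event_decode.go: decode a single minikube --output=json line. The
// sample is copied from the log above; no extra fields are assumed.
package main

import (
	"encoding/json"
	"fmt"
)

type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"2f714c3d-e73c-4608-8f4a-480de82ba049","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var e cloudEvent
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exitcode %s)\n", e.Data["name"], e.Data["message"], e.Data["exitcode"])
}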

                                                
                                    
TestKicCustomNetwork/create_custom_network (33.97s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-132457 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-132457 --network=: (31.892617426s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-132457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-132457
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-132457: (2.060142358s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.97s)
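This test starts a docker-driver profile with a --network flag and then confirms the network name shows up in docker network ls. A sketch of that flow with hypothetical profile and network names (with the docker driver, minikube creates the named network if it does not already exist):

// kic_network.go: start a docker-driver profile on a named network and
// confirm the name appears in "docker network ls". Both names below are
// hypothetical.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile, network := "net-demo", "demo-net"
	if out, err := exec.Command("minikube", "start", "-p", profile, "--driver=docker", "--network="+network).CombinedOutput(); err != nil {
		fmt.Print(string(out))
		panic(err)
	}
	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		panic(err)
	}
	for _, name := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if name == network {
			fmt.Println("network created:", network)
		}
	}
}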

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (22.92s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-424385 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-424385 --network=bridge: (20.982041405s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-424385" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-424385
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-424385: (1.920432154s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.92s)

                                                
                                    
TestKicExistingNetwork (21.98s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0926 23:01:34.189986   13040 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0926 23:01:34.206121   13040 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0926 23:01:34.206201   13040 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0926 23:01:34.206221   13040 cli_runner.go:164] Run: docker network inspect existing-network
W0926 23:01:34.223343   13040 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0926 23:01:34.223378   13040 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0926 23:01:34.223395   13040 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0926 23:01:34.223518   13040 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0926 23:01:34.239892   13040 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2261b2191090 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:5d:12:aa:39:a5} reservation:<nil>}
I0926 23:01:34.240323   13040 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f5f600}
I0926 23:01:34.240356   13040 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0926 23:01:34.240405   13040 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0926 23:01:34.295407   13040 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-426252 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-426252 --network=existing-network: (19.951252799s)
helpers_test.go:175: Cleaning up "existing-network-426252" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-426252
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-426252: (1.889475804s)
I0926 23:01:56.152868   13040 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (21.98s)
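Here the network is created up front (the docker network create call at 23:01:34 above) and --network=existing-network reuses it. A sketch of that pre-creation step, mirroring the flags from the log; the 192.168.58.0/24 subnet is the free one the test picked after skipping 192.168.49.0/24:

// precreate_network.go: create the bridge network the test reuses,
// mirroring the docker network create flags in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	args := []string{
		"network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network",
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("network create failed:", err)
	}
}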

                                                
                                    
TestKicCustomSubnet (22.52s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-032647 --subnet=192.168.60.0/24
E0926 23:01:57.765465   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:02:00.555926   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-032647 --subnet=192.168.60.0/24: (20.451089488s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-032647 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-032647" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-032647
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-032647: (2.048187133s)
--- PASS: TestKicCustomSubnet (22.52s)

                                                
                                    
TestKicStaticIP (22.4s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-592518 --static-ip=192.168.200.200
E0926 23:02:28.256307   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-592518 --static-ip=192.168.200.200: (20.239408651s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-592518 ip
helpers_test.go:175: Cleaning up "static-ip-592518" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-592518
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-592518: (2.033034399s)
--- PASS: TestKicStaticIP (22.40s)
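TestKicStaticIP pins the node address with --static-ip and reads it back with minikube ip. A sketch of those two steps; the profile name and address below are the ones from this run and should be treated as placeholders:

// static_ip.go: start a profile with a fixed address and read it back,
// mirroring the start/ip commands logged above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "static-ip-592518"
	if out, err := exec.Command("minikube", "start", "-p", profile, "--static-ip=192.168.200.200").CombinedOutput(); err != nil {
		fmt.Print(string(out))
		panic(err)
	}
	ip, err := exec.Command("minikube", "-p", profile, "ip").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("assigned IP:", strings.TrimSpace(string(ip)))
}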

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (44.36s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-740103 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-740103 --driver=docker  --container-runtime=containerd: (19.336599498s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-759438 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-759438 --driver=docker  --container-runtime=containerd: (19.400131315s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-740103
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-759438
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-759438" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-759438
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-759438: (2.23228824s)
helpers_test.go:175: Cleaning up "first-740103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-740103
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-740103: (2.276504955s)
--- PASS: TestMinikubeProfile (44.36s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.16s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-378696 --memory=3072 --mount-string /tmp/TestMountStartserial2926469206/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-378696 --memory=3072 --mount-string /tmp/TestMountStartserial2926469206/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.159187198s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.16s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-378696 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)
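The mount-start pair of tests starts a --no-kubernetes profile with a host:guest --mount-string and verifies it by listing /minikube-host over SSH. A sketch of that start-and-verify flow, mirroring the flags logged above; the host path and profile name are placeholders:

// mount_verify.go: start a profile with a host mount and list the
// guest-side mount point over SSH. Flags mirror the run above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "mount-demo" // hypothetical profile name
	start := exec.Command("minikube", "start", "-p", profile, "--memory=3072",
		"--mount-string", "/tmp/mount-demo:/minikube-host",
		"--mount-port", "46464", "--no-kubernetes",
		"--driver=docker", "--container-runtime=containerd")
	if out, err := start.CombinedOutput(); err != nil {
		fmt.Print(string(out))
		panic(err)
	}
	out, err := exec.Command("minikube", "-p", profile, "ssh", "--", "ls", "/minikube-host").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}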

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.46s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-391477 --memory=3072 --mount-string /tmp/TestMountStartserial2926469206/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-391477 --memory=3072 --mount-string /tmp/TestMountStartserial2926469206/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.463982106s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.46s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-391477 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.6s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-378696 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-378696 --alsologtostderr -v=5: (1.604874289s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-391477 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                    
TestMountStart/serial/Stop (1.16s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-391477
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-391477: (1.164020017s)
--- PASS: TestMountStart/serial/Stop (1.16s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.67s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-391477
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-391477: (6.673466764s)
--- PASS: TestMountStart/serial/RestartStopped (7.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-391477 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (54.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-553425 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-553425 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (54.157739119s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (54.59s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (18.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553425 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553425 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-553425 -- rollout status deployment/busybox: (16.831115036s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553425 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553425 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553425 -- exec busybox-7b57f96db7-8j2k9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553425 -- exec busybox-7b57f96db7-w9zrk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553425 -- exec busybox-7b57f96db7-8j2k9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553425 -- exec busybox-7b57f96db7-w9zrk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553425 -- exec busybox-7b57f96db7-8j2k9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553425 -- exec busybox-7b57f96db7-w9zrk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (18.15s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553425 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553425 -- exec busybox-7b57f96db7-8j2k9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553425 -- exec busybox-7b57f96db7-8j2k9 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553425 -- exec busybox-7b57f96db7-w9zrk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553425 -- exec busybox-7b57f96db7-w9zrk -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.69s)

                                                
                                    
TestMultiNode/serial/AddNode (12.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-553425 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-553425 -v=5 --alsologtostderr: (11.639286895s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (12.25s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-553425 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 cp testdata/cp-test.txt multinode-553425:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 ssh -n multinode-553425 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 cp multinode-553425:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3867620111/001/cp-test_multinode-553425.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 ssh -n multinode-553425 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 cp multinode-553425:/home/docker/cp-test.txt multinode-553425-m02:/home/docker/cp-test_multinode-553425_multinode-553425-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 ssh -n multinode-553425 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 ssh -n multinode-553425-m02 "sudo cat /home/docker/cp-test_multinode-553425_multinode-553425-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 cp multinode-553425:/home/docker/cp-test.txt multinode-553425-m03:/home/docker/cp-test_multinode-553425_multinode-553425-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 ssh -n multinode-553425 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 ssh -n multinode-553425-m03 "sudo cat /home/docker/cp-test_multinode-553425_multinode-553425-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 cp testdata/cp-test.txt multinode-553425-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 ssh -n multinode-553425-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 cp multinode-553425-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3867620111/001/cp-test_multinode-553425-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 ssh -n multinode-553425-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 cp multinode-553425-m02:/home/docker/cp-test.txt multinode-553425:/home/docker/cp-test_multinode-553425-m02_multinode-553425.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 ssh -n multinode-553425-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 ssh -n multinode-553425 "sudo cat /home/docker/cp-test_multinode-553425-m02_multinode-553425.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 cp multinode-553425-m02:/home/docker/cp-test.txt multinode-553425-m03:/home/docker/cp-test_multinode-553425-m02_multinode-553425-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 ssh -n multinode-553425-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 ssh -n multinode-553425-m03 "sudo cat /home/docker/cp-test_multinode-553425-m02_multinode-553425-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 cp testdata/cp-test.txt multinode-553425-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 ssh -n multinode-553425-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 cp multinode-553425-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3867620111/001/cp-test_multinode-553425-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 ssh -n multinode-553425-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 cp multinode-553425-m03:/home/docker/cp-test.txt multinode-553425:/home/docker/cp-test_multinode-553425-m03_multinode-553425.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 ssh -n multinode-553425-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 ssh -n multinode-553425 "sudo cat /home/docker/cp-test_multinode-553425-m03_multinode-553425.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 cp multinode-553425-m03:/home/docker/cp-test.txt multinode-553425-m02:/home/docker/cp-test_multinode-553425-m03_multinode-553425-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 ssh -n multinode-553425-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 ssh -n multinode-553425-m02 "sudo cat /home/docker/cp-test_multinode-553425-m03_multinode-553425-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.90s)

                                                
                                    
TestMultiNode/serial/StopNode (2.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-553425 node stop m03: (1.209106092s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-553425 status: exit status 7 (450.889662ms)

                                                
                                                
-- stdout --
	multinode-553425
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-553425-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-553425-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-553425 status --alsologtostderr: exit status 7 (457.513638ms)

                                                
                                                
-- stdout --
	multinode-553425
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-553425-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-553425-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 23:05:26.248040  170566 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:05:26.248141  170566 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:05:26.248154  170566 out.go:374] Setting ErrFile to fd 2...
	I0926 23:05:26.248161  170566 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:05:26.248329  170566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
	I0926 23:05:26.248485  170566 out.go:368] Setting JSON to false
	I0926 23:05:26.248519  170566 mustload.go:65] Loading cluster: multinode-553425
	I0926 23:05:26.248672  170566 notify.go:220] Checking for updates...
	I0926 23:05:26.248887  170566 config.go:182] Loaded profile config "multinode-553425": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0926 23:05:26.248901  170566 status.go:174] checking status of multinode-553425 ...
	I0926 23:05:26.249303  170566 cli_runner.go:164] Run: docker container inspect multinode-553425 --format={{.State.Status}}
	I0926 23:05:26.266254  170566 status.go:371] multinode-553425 host status = "Running" (err=<nil>)
	I0926 23:05:26.266267  170566 host.go:66] Checking if "multinode-553425" exists ...
	I0926 23:05:26.266494  170566 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-553425
	I0926 23:05:26.283566  170566 host.go:66] Checking if "multinode-553425" exists ...
	I0926 23:05:26.283811  170566 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 23:05:26.283840  170566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-553425
	I0926 23:05:26.301254  170566 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/multinode-553425/id_rsa Username:docker}
	I0926 23:05:26.392252  170566 ssh_runner.go:195] Run: systemctl --version
	I0926 23:05:26.396486  170566 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:05:26.407252  170566 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 23:05:26.462743  170566 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-26 23:05:26.45388974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 23:05:26.463299  170566 kubeconfig.go:125] found "multinode-553425" server: "https://192.168.67.2:8443"
	I0926 23:05:26.463327  170566 api_server.go:166] Checking apiserver status ...
	I0926 23:05:26.463357  170566 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:05:26.474412  170566 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup
	W0926 23:05:26.483295  170566 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0926 23:05:26.483330  170566 ssh_runner.go:195] Run: ls
	I0926 23:05:26.486590  170566 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0926 23:05:26.491581  170566 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0926 23:05:26.491602  170566 status.go:463] multinode-553425 apiserver status = Running (err=<nil>)
	I0926 23:05:26.491611  170566 status.go:176] multinode-553425 status: &{Name:multinode-553425 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:05:26.491629  170566 status.go:174] checking status of multinode-553425-m02 ...
	I0926 23:05:26.491880  170566 cli_runner.go:164] Run: docker container inspect multinode-553425-m02 --format={{.State.Status}}
	I0926 23:05:26.508133  170566 status.go:371] multinode-553425-m02 host status = "Running" (err=<nil>)
	I0926 23:05:26.508151  170566 host.go:66] Checking if "multinode-553425-m02" exists ...
	I0926 23:05:26.508432  170566 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-553425-m02
	I0926 23:05:26.524623  170566 host.go:66] Checking if "multinode-553425-m02" exists ...
	I0926 23:05:26.524857  170566 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 23:05:26.524889  170566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-553425-m02
	I0926 23:05:26.541172  170566 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21642-9508/.minikube/machines/multinode-553425-m02/id_rsa Username:docker}
	I0926 23:05:26.632507  170566 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:05:26.643165  170566 status.go:176] multinode-553425-m02 status: &{Name:multinode-553425-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:05:26.643203  170566 status.go:174] checking status of multinode-553425-m03 ...
	I0926 23:05:26.643470  170566 cli_runner.go:164] Run: docker container inspect multinode-553425-m03 --format={{.State.Status}}
	I0926 23:05:26.660109  170566 status.go:371] multinode-553425-m03 host status = "Stopped" (err=<nil>)
	I0926 23:05:26.660123  170566 status.go:384] host is not running, skipping remaining checks
	I0926 23:05:26.660128  170566 status.go:176] multinode-553425-m03 status: &{Name:multinode-553425-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.12s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (6.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-553425 node start m03 -v=5 --alsologtostderr: (6.146997184s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.79s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (72.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-553425
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-553425
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-553425: (24.768423877s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-553425 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-553425 --wait=true -v=5 --alsologtostderr: (47.28122582s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-553425
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.14s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-553425 node delete m03: (4.492200008s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.04s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 stop
E0926 23:06:57.768870   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:07:00.555443   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-553425 stop: (23.639545954s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-553425 status: exit status 7 (77.915086ms)

                                                
                                                
-- stdout --
	multinode-553425
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-553425-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-553425 status --alsologtostderr: exit status 7 (80.643883ms)

                                                
                                                
-- stdout --
	multinode-553425
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-553425-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 23:07:14.389572  180125 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:07:14.389661  180125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:07:14.389672  180125 out.go:374] Setting ErrFile to fd 2...
	I0926 23:07:14.389677  180125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:07:14.389907  180125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
	I0926 23:07:14.390109  180125 out.go:368] Setting JSON to false
	I0926 23:07:14.390150  180125 mustload.go:65] Loading cluster: multinode-553425
	I0926 23:07:14.390196  180125 notify.go:220] Checking for updates...
	I0926 23:07:14.390562  180125 config.go:182] Loaded profile config "multinode-553425": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0926 23:07:14.390578  180125 status.go:174] checking status of multinode-553425 ...
	I0926 23:07:14.391109  180125 cli_runner.go:164] Run: docker container inspect multinode-553425 --format={{.State.Status}}
	I0926 23:07:14.411236  180125 status.go:371] multinode-553425 host status = "Stopped" (err=<nil>)
	I0926 23:07:14.411271  180125 status.go:384] host is not running, skipping remaining checks
	I0926 23:07:14.411285  180125 status.go:176] multinode-553425 status: &{Name:multinode-553425 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:07:14.411319  180125 status.go:174] checking status of multinode-553425-m02 ...
	I0926 23:07:14.411540  180125 cli_runner.go:164] Run: docker container inspect multinode-553425-m02 --format={{.State.Status}}
	I0926 23:07:14.427899  180125 status.go:371] multinode-553425-m02 host status = "Stopped" (err=<nil>)
	I0926 23:07:14.427915  180125 status.go:384] host is not running, skipping remaining checks
	I0926 23:07:14.427920  180125 status.go:176] multinode-553425-m02 status: &{Name:multinode-553425-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.80s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (49.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-553425 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-553425 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (48.584513861s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553425 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.13s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (22.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-553425
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-553425-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-553425-m02 --driver=docker  --container-runtime=containerd: exit status 14 (59.777638ms)

                                                
                                                
-- stdout --
	* [multinode-553425-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-9508/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-553425-m02' is duplicated with machine name 'multinode-553425-m02' in profile 'multinode-553425'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-553425-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-553425-m03 --driver=docker  --container-runtime=containerd: (19.558389873s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-553425
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-553425: exit status 80 (267.010487ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-553425 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-553425-m03 already exists in multinode-553425-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-553425-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-553425-m03: (2.23758665s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.17s)

                                                
                                    
TestPreload (116.59s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-868201 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-868201 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (48.618209578s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-868201 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-868201 image pull gcr.io/k8s-minikube/busybox: (2.406008905s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-868201
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-868201: (5.541166868s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-868201 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0926 23:10:00.832372   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-868201 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (57.477103641s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-868201 image list
helpers_test.go:175: Cleaning up "test-preload-868201" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-868201
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-868201: (2.340154829s)
--- PASS: TestPreload (116.59s)

                                                
                                    
TestScheduledStopUnix (95.55s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-697543 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-697543 --memory=3072 --driver=docker  --container-runtime=containerd: (20.09859534s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-697543 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-697543 -n scheduled-stop-697543
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-697543 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0926 23:10:46.771666   13040 retry.go:31] will retry after 126.072µs: open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/scheduled-stop-697543/pid: no such file or directory
I0926 23:10:46.772814   13040 retry.go:31] will retry after 79.535µs: open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/scheduled-stop-697543/pid: no such file or directory
I0926 23:10:46.773969   13040 retry.go:31] will retry after 128.241µs: open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/scheduled-stop-697543/pid: no such file or directory
I0926 23:10:46.775090   13040 retry.go:31] will retry after 353.56µs: open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/scheduled-stop-697543/pid: no such file or directory
I0926 23:10:46.776202   13040 retry.go:31] will retry after 673.894µs: open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/scheduled-stop-697543/pid: no such file or directory
I0926 23:10:46.777337   13040 retry.go:31] will retry after 825.332µs: open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/scheduled-stop-697543/pid: no such file or directory
I0926 23:10:46.778454   13040 retry.go:31] will retry after 665.551µs: open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/scheduled-stop-697543/pid: no such file or directory
I0926 23:10:46.779575   13040 retry.go:31] will retry after 1.385754ms: open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/scheduled-stop-697543/pid: no such file or directory
I0926 23:10:46.781778   13040 retry.go:31] will retry after 1.71884ms: open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/scheduled-stop-697543/pid: no such file or directory
I0926 23:10:46.783959   13040 retry.go:31] will retry after 4.392508ms: open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/scheduled-stop-697543/pid: no such file or directory
I0926 23:10:46.789209   13040 retry.go:31] will retry after 5.109052ms: open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/scheduled-stop-697543/pid: no such file or directory
I0926 23:10:46.795426   13040 retry.go:31] will retry after 4.587028ms: open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/scheduled-stop-697543/pid: no such file or directory
I0926 23:10:46.800616   13040 retry.go:31] will retry after 11.696873ms: open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/scheduled-stop-697543/pid: no such file or directory
I0926 23:10:46.812815   13040 retry.go:31] will retry after 21.139151ms: open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/scheduled-stop-697543/pid: no such file or directory
I0926 23:10:46.834240   13040 retry.go:31] will retry after 27.529571ms: open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/scheduled-stop-697543/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-697543 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-697543 -n scheduled-stop-697543
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-697543
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-697543 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-697543
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-697543: exit status 7 (63.275681ms)

                                                
                                                
-- stdout --
	scheduled-stop-697543
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-697543 -n scheduled-stop-697543
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-697543 -n scheduled-stop-697543: exit status 7 (61.447698ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-697543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-697543
E0926 23:11:57.765879   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:12:00.555080   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-697543: (4.199213671s)
--- PASS: TestScheduledStopUnix (95.55s)

                                                
                                    
TestInsufficientStorage (8.94s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-878592 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-878592 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (6.575491625s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"59ca2465-6f81-476a-92a8-20469ab3ae96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-878592] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c2192284-b474-4b9b-9b6f-63b8ef281065","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21642"}}
	{"specversion":"1.0","id":"b47469e8-727a-4c7c-a5e4-1674bd81b7f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"820c8188-1833-442d-9969-b66b9ed1cf5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21642-9508/kubeconfig"}}
	{"specversion":"1.0","id":"933f542e-e844-408f-b4b0-283ece108457","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube"}}
	{"specversion":"1.0","id":"765db8a7-82db-485d-9093-71fbbebdeab0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"210c8642-fc42-496b-94df-365625c369fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ce55fac8-288e-42d9-bede-2a26633ef843","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"204ef2a7-735f-4ce0-89cf-a58ff0672607","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"747b72c4-21e7-4aeb-8184-52c19e193943","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e7f701cd-42ac-4e92-aa1f-5fe47e6da21d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"10c60f3a-e942-4204-9519-41923504e451","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-878592\" primary control-plane node in \"insufficient-storage-878592\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ddf99db9-946a-4a1e-9797-61a4f2f9b326","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d8971702-82e5-4215-a438-3f27b65acc82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4b1084be-6580-484b-ba17-536a235dfc3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-878592 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-878592 --output=json --layout=cluster: exit status 7 (259.007411ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-878592","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-878592","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0926 23:12:08.665387  202207 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-878592" does not appear in /home/jenkins/minikube-integration/21642-9508/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-878592 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-878592 --output=json --layout=cluster: exit status 7 (251.933983ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-878592","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-878592","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0926 23:12:08.918580  202310 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-878592" does not appear in /home/jenkins/minikube-integration/21642-9508/kubeconfig
	E0926 23:12:08.928876  202310 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/insufficient-storage-878592/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-878592" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-878592
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-878592: (1.853613972s)
--- PASS: TestInsufficientStorage (8.94s)

                                                
                                    
TestRunningBinaryUpgrade (53.53s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.345260311 start -p running-upgrade-866381 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E0926 23:13:23.618489   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.345260311 start -p running-upgrade-866381 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (24.71561106s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-866381 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-866381 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (23.802066315s)
helpers_test.go:175: Cleaning up "running-upgrade-866381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-866381
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-866381: (2.383985394s)
--- PASS: TestRunningBinaryUpgrade (53.53s)

                                                
                                    
TestMissingContainerUpgrade (80.15s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1999128690 start -p missing-upgrade-522237 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1999128690 start -p missing-upgrade-522237 --memory=3072 --driver=docker  --container-runtime=containerd: (21.211941762s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-522237
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-522237
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-522237 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-522237 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (53.564969656s)
helpers_test.go:175: Cleaning up "missing-upgrade-522237" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-522237
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-522237: (2.084211706s)
--- PASS: TestMissingContainerUpgrade (80.15s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-072490 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-072490 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (69.840089ms)

-- stdout --
	* [NoKubernetes-072490] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-9508/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestStoppedBinaryUpgrade/Setup (2.67s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.67s)

TestNoKubernetes/serial/StartWithK8s (37.29s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-072490 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-072490 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.967646139s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-072490 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.29s)

TestStoppedBinaryUpgrade/Upgrade (64.05s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3502656618 start -p stopped-upgrade-087454 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3502656618 start -p stopped-upgrade-087454 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (45.754863815s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3502656618 -p stopped-upgrade-087454 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3502656618 -p stopped-upgrade-087454 stop: (1.205978805s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-087454 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-087454 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (17.084648481s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (64.05s)

TestNoKubernetes/serial/StartWithStopK8s (23.84s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-072490 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-072490 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (21.394099294s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-072490 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-072490 status -o json: exit status 2 (350.911983ms)

-- stdout --
	{"Name":"NoKubernetes-072490","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-072490
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-072490: (2.097779469s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.84s)

TestNoKubernetes/serial/Start (8.1s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-072490 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-072490 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (8.103319079s)
--- PASS: TestNoKubernetes/serial/Start (8.10s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-087454
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-087454: (1.12052243s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-072490 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-072490 "sudo systemctl is-active --quiet service kubelet": exit status 1 (264.800918ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

TestNoKubernetes/serial/ProfileList (1.03s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.03s)

TestNoKubernetes/serial/Stop (1.18s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-072490
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-072490: (1.176426505s)
--- PASS: TestNoKubernetes/serial/Stop (1.18s)

TestNoKubernetes/serial/StartNoArgs (7.23s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-072490 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-072490 --driver=docker  --container-runtime=containerd: (7.232262891s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.23s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-072490 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-072490 "sudo systemctl is-active --quiet service kubelet": exit status 1 (269.621468ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestNetworkPlugins/group/false (4.93s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-708263 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-708263 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (485.329125ms)

-- stdout --
	* [false-708263] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-9508/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0926 23:13:35.133824  231613 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:13:35.136657  231613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:13:35.136781  231613 out.go:374] Setting ErrFile to fd 2...
	I0926 23:13:35.136793  231613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:13:35.137276  231613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-9508/.minikube/bin
	I0926 23:13:35.138181  231613 out.go:368] Setting JSON to false
	I0926 23:13:35.139621  231613 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3350,"bootTime":1758925065,"procs":261,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 23:13:35.139787  231613 start.go:140] virtualization: kvm guest
	I0926 23:13:35.142958  231613 out.go:179] * [false-708263] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 23:13:35.144535  231613 notify.go:220] Checking for updates...
	I0926 23:13:35.144576  231613 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 23:13:35.145914  231613 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 23:13:35.147249  231613 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-9508/kubeconfig
	I0926 23:13:35.148233  231613 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-9508/.minikube
	I0926 23:13:35.149481  231613 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 23:13:35.150910  231613 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 23:13:35.152776  231613 config.go:182] Loaded profile config "kubernetes-upgrade-655811": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0926 23:13:35.153029  231613 config.go:182] Loaded profile config "missing-upgrade-522237": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I0926 23:13:35.153226  231613 config.go:182] Loaded profile config "running-upgrade-866381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I0926 23:13:35.153398  231613 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 23:13:35.186665  231613 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 23:13:35.186848  231613 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 23:13:35.241688  231613 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:84 SystemTime:2025-09-26 23:13:35.231930189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 23:13:35.241834  231613 docker.go:318] overlay module found
	I0926 23:13:35.306461  231613 out.go:179] * Using the docker driver based on user configuration
	I0926 23:13:35.394317  231613 start.go:304] selected driver: docker
	I0926 23:13:35.394339  231613 start.go:924] validating driver "docker" against <nil>
	I0926 23:13:35.394353  231613 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 23:13:35.491081  231613 out.go:203] 
	W0926 23:13:35.539807  231613 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0926 23:13:35.541336  231613 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-708263 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-708263

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-708263

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-708263

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-708263

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-708263

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-708263

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-708263

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-708263

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-708263

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-708263

>>> host: /etc/nsswitch.conf:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: /etc/hosts:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: /etc/resolv.conf:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-708263

>>> host: crictl pods:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: crictl containers:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> k8s: describe netcat deployment:
error: context "false-708263" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-708263" does not exist

>>> k8s: netcat logs:
error: context "false-708263" does not exist

>>> k8s: describe coredns deployment:
error: context "false-708263" does not exist

>>> k8s: describe coredns pods:
error: context "false-708263" does not exist

>>> k8s: coredns logs:
error: context "false-708263" does not exist

>>> k8s: describe api server pod(s):
error: context "false-708263" does not exist

>>> k8s: api server logs:
error: context "false-708263" does not exist

>>> host: /etc/cni:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: ip a s:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: ip r s:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: iptables-save:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: iptables table nat:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> k8s: describe kube-proxy daemon set:
error: context "false-708263" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-708263" does not exist

>>> k8s: kube-proxy logs:
error: context "false-708263" does not exist

>>> host: kubelet daemon status:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: kubelet daemon config:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> k8s: kubelet logs:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21642-9508/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:13:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: missing-upgrade-522237
contexts:
- context:
    cluster: missing-upgrade-522237
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:13:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-522237
  name: missing-upgrade-522237
current-context: ""
kind: Config
users:
- name: missing-upgrade-522237
  user:
    client-certificate: /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/missing-upgrade-522237/client.crt
    client-key: /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/missing-upgrade-522237/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-708263

>>> host: docker daemon status:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: docker daemon config:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: /etc/docker/daemon.json:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: docker system info:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: cri-docker daemon status:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: cri-docker daemon config:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: cri-dockerd version:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: containerd daemon status:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: containerd daemon config:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: /etc/containerd/config.toml:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: containerd config dump:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: crio daemon status:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: crio daemon config:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: /etc/crio:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

>>> host: crio config:
* Profile "false-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708263"

----------------------- debugLogs end: false-708263 [took: 4.267028078s] --------------------------------
helpers_test.go:175: Cleaning up "false-708263" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-708263
--- PASS: TestNetworkPlugins/group/false (4.93s)

TestPause/serial/Start (46.29s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-930820 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-930820 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (46.293070334s)
--- PASS: TestPause/serial/Start (46.29s)

TestStartStop/group/old-k8s-version/serial/FirstStart (52.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-011002 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-011002 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (52.022973977s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (52.02s)

TestPause/serial/SecondStartNoReconfiguration (5.29s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-930820 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-930820 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.276307236s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.29s)

TestPause/serial/Pause (0.66s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-930820 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.66s)

TestPause/serial/VerifyStatus (0.28s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-930820 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-930820 --output=json --layout=cluster: exit status 2 (283.010848ms)

-- stdout --
	{"Name":"pause-930820","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-930820","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)

TestPause/serial/Unpause (0.57s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-930820 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.57s)

TestPause/serial/PauseAgain (0.66s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-930820 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.66s)

TestPause/serial/DeletePaused (2.58s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-930820 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-930820 --alsologtostderr -v=5: (2.581157734s)
--- PASS: TestPause/serial/DeletePaused (2.58s)

TestPause/serial/VerifyDeletedResources (18.01s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (17.960523906s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-930820
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-930820: exit status 1 (15.97302ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-930820: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (18.01s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-011002 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ce248f1d-6bcc-4afc-9736-b5abecc42358] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ce248f1d-6bcc-4afc-9736-b5abecc42358] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003143699s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-011002 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.25s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-011002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-011002 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/old-k8s-version/serial/Stop (13.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-011002 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-011002 --alsologtostderr -v=3: (13.209540627s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.21s)

TestStartStop/group/no-preload/serial/FirstStart (48.99s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-703674 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-703674 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (48.988451671s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (48.99s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-011002 -n old-k8s-version-011002
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-011002 -n old-k8s-version-011002: exit status 7 (70.752253ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-011002 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (49.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-011002 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-011002 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (49.660816369s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-011002 -n old-k8s-version-011002
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (49.99s)

TestStartStop/group/no-preload/serial/DeployApp (10.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-703674 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [58fd085f-3fd3-49ac-970a-8fc243f69982] Pending
helpers_test.go:352: "busybox" [58fd085f-3fd3-49ac-970a-8fc243f69982] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [58fd085f-3fd3-49ac-970a-8fc243f69982] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003381698s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-703674 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.26s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.79s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-703674 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-703674 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.79s)

TestStartStop/group/no-preload/serial/Stop (11.94s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-703674 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-703674 --alsologtostderr -v=3: (11.938629549s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.94s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-rqfsl" [520bda15-b1fe-4385-ab96-59e5c4d0706a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002675177s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-rqfsl" [520bda15-b1fe-4385-ab96-59e5c4d0706a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003189866s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-011002 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-011002 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-011002 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-011002 -n old-k8s-version-011002
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-011002 -n old-k8s-version-011002: exit status 2 (318.143235ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-011002 -n old-k8s-version-011002
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-011002 -n old-k8s-version-011002: exit status 2 (280.493537ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-011002 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-011002 -n old-k8s-version-011002
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-011002 -n old-k8s-version-011002
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.56s)
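
The pause check above boils down to a pause/status/unpause round trip; a hand-run sketch with the same commands (the exit status 2 on the status calls is expected while the cluster is paused):

	out/minikube-linux-amd64 pause -p old-k8s-version-011002 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-011002 -n old-k8s-version-011002   # reports "Paused"
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-011002 -n old-k8s-version-011002     # reports "Stopped"
	out/minikube-linux-amd64 unpause -p old-k8s-version-011002 --alsologtostderr -v=1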

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-703674 -n no-preload-703674
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-703674 -n no-preload-703674: exit status 7 (61.758743ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-703674 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)
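
A sketch of the same check by hand; the exit status 7 from the status call simply means the profile is stopped, which is the state this test expects before enabling the dashboard addon:

	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-703674 -n no-preload-703674   # prints "Stopped", exits 7
	out/minikube-linux-amd64 addons enable dashboard -p no-preload-703674 --images=MetricsScraper=registry.k8s.io/echoserver:1.4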

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (51.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-703674 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
E0926 23:16:57.765606   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-703674 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (50.861301228s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-703674 -n no-preload-703674
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.15s)
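
SecondStart restarts the stopped profile with the same flags and then confirms the host is back up; roughly, by hand (the "Running" result from the final status call is an assumption, its output is not captured in this log):

	out/minikube-linux-amd64 start -p no-preload-703674 --memory=3072 --alsologtostderr --wait=true \
	  --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.0
	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-703674 -n no-preload-703674   # should now report the host as running (assumption)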

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (46.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-713018 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-713018 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (46.450146149s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (46.45s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-682182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-682182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (42.843150478s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.84s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-cwtdc" [24c1dc60-bb47-46a2-8552-6413bc74804d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003158135s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-713018 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2a8dbefb-b28b-4fed-9a6e-09e72472c4c3] Pending
helpers_test.go:352: "busybox" [2a8dbefb-b28b-4fed-9a6e-09e72472c4c3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2a8dbefb-b28b-4fed-9a6e-09e72472c4c3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003135315s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-713018 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.22s)
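
The DeployApp step is a plain kubectl workflow: create the busybox pod from the test manifest, wait for it to become Ready, then exec a command inside it. A hand-run sketch; the kubectl wait call is an assumption standing in for the test's own polling helper:

	kubectl --context embed-certs-713018 create -f testdata/busybox.yaml
	kubectl --context embed-certs-713018 wait pod busybox --for=condition=Ready --timeout=8m   # assumption: replaces the test's pod polling
	kubectl --context embed-certs-713018 exec busybox -- /bin/sh -c "ulimit -n"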

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-cwtdc" [24c1dc60-bb47-46a2-8552-6413bc74804d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003394022s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-703674 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)
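
Both dashboard checks above reduce to looking for the labelled pods and the scraper deployment in the kubernetes-dashboard namespace; a manual equivalent (the get pods call is an assumption in place of the test's wait helper):

	kubectl --context no-preload-703674 get pods -l k8s-app=kubernetes-dashboard -n kubernetes-dashboard   # assumption: list instead of the helper's wait
	kubectl --context no-preload-703674 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard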

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-713018 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-713018 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.76s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-713018 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-713018 --alsologtostderr -v=3: (12.052485394s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-703674 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-703674 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-703674 -n no-preload-703674
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-703674 -n no-preload-703674: exit status 2 (291.904745ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-703674 -n no-preload-703674
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-703674 -n no-preload-703674: exit status 2 (280.662582ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-703674 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-703674 -n no-preload-703674
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-703674 -n no-preload-703674
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.66s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-682182 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7c27ede9-6c57-4bc1-ac19-cd080c90d2a1] Pending
helpers_test.go:352: "busybox" [7c27ede9-6c57-4bc1-ac19-cd080c90d2a1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7c27ede9-6c57-4bc1-ac19-cd080c90d2a1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.002926602s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-682182 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (27.73s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-611405 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-611405 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (27.727800284s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (27.73s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-682182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-682182 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-682182 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-682182 --alsologtostderr -v=3: (11.94303332s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-713018 -n embed-certs-713018
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-713018 -n embed-certs-713018: exit status 7 (80.09121ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-713018 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (50.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-713018 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-713018 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (50.003709706s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-713018 -n embed-certs-713018
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-682182 -n default-k8s-diff-port-682182
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-682182 -n default-k8s-diff-port-682182: exit status 7 (82.427348ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-682182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.7s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-682182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-682182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (45.375356843s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-682182 -n default-k8s-diff-port-682182
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.70s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-611405 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-611405 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-611405 --alsologtostderr -v=3: (1.221981531s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-611405 -n newest-cni-611405
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-611405 -n newest-cni-611405: exit status 7 (64.593404ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-611405 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (11.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-611405 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-611405 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (11.129087659s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-611405 -n newest-cni-611405
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.46s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-611405 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-611405 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-611405 -n newest-cni-611405
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-611405 -n newest-cni-611405: exit status 2 (278.053917ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-611405 -n newest-cni-611405
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-611405 -n newest-cni-611405: exit status 2 (284.915238ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-611405 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-611405 -n newest-cni-611405
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-611405 -n newest-cni-611405
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.42s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (44.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-708263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-708263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (44.459434084s)
--- PASS: TestNetworkPlugins/group/auto/Start (44.46s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gcw7h" [989ea6d3-af38-479e-ba1c-93cf6d489999] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004003348s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gcw7h" [989ea6d3-af38-479e-ba1c-93cf6d489999] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003705724s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-713018 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kb5s9" [ffebfcd9-62e8-430a-b9d1-0779748ee93d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003688978s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-713018 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-713018 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-713018 -n embed-certs-713018
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-713018 -n embed-certs-713018: exit status 2 (287.069396ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-713018 -n embed-certs-713018
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-713018 -n embed-certs-713018: exit status 2 (288.655605ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-713018 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-713018 -n embed-certs-713018
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-713018 -n embed-certs-713018
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kb5s9" [ffebfcd9-62e8-430a-b9d1-0779748ee93d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004076726s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-682182 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (73.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-708263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-708263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m13.427338943s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (73.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-682182 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-682182 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-682182 -n default-k8s-diff-port-682182
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-682182 -n default-k8s-diff-port-682182: exit status 2 (294.082063ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-682182 -n default-k8s-diff-port-682182
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-682182 -n default-k8s-diff-port-682182: exit status 2 (286.754042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-682182 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-682182 --alsologtostderr -v=1: (1.076880213s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-682182 -n default-k8s-diff-port-682182
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-682182 -n default-k8s-diff-port-682182
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.01s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (45.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-708263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-708263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (45.647758724s)
--- PASS: TestNetworkPlugins/group/calico/Start (45.65s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-708263 "pgrep -a kubelet"
I0926 23:19:36.349652   13040 config.go:182] Loaded profile config "auto-708263": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-708263 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fj4f2" [fbed35b3-703e-4c5c-bdc3-f51ea7575bba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fj4f2" [fbed35b3-703e-4c5c-bdc3-f51ea7575bba] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.002478393s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.20s)
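
NetCatPod deploys the netcat test workload and waits for it to come up; by hand that is roughly the following (the -w watch flag is an assumption replacing the test's polling helper, the label comes from the wait line above):

	kubectl --context auto-708263 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-708263 get pods -l app=netcat -w   # assumption: watch until the pod reports Running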

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-708263 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-708263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-708263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (41.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-708263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-708263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (41.928355321s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (41.93s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-5rx8k" [d8f187fe-5bd8-4119-b5d2-c3c8fa955db6] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-5rx8k" [d8f187fe-5bd8-4119-b5d2-c3c8fa955db6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004269619s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-708263 "pgrep -a kubelet"
I0926 23:20:17.464692   13040 config.go:182] Loaded profile config "calico-708263": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-708263 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rnx58" [d5bfdbdf-3b59-4d6a-b946-73c5c85c1686] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rnx58" [d5bfdbdf-3b59-4d6a-b946-73c5c85c1686] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003316256s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-708263 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-708263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-708263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)
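
The DNS, Localhost, and HairPin checks above all run inside the netcat deployment: DNS resolution of the in-cluster API service, a TCP probe to localhost, and a hairpin probe where the pod dials its own service name. Reused verbatim from this log, they can be run by hand as:

	kubectl --context calico-708263 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context calico-708263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context calico-708263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"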

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-wqc96" [6e060acb-3985-4d19-ac59-e5f1a51fb23f] Running
E0926 23:20:32.365206   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/old-k8s-version-011002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:20:32.371703   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/old-k8s-version-011002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:20:32.382983   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/old-k8s-version-011002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:20:32.404366   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/old-k8s-version-011002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:20:32.445921   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/old-k8s-version-011002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:20:32.527328   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/old-k8s-version-011002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:20:32.688796   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/old-k8s-version-011002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:20:33.010091   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/old-k8s-version-011002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:20:33.651952   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/old-k8s-version-011002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:20:34.933873   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/old-k8s-version-011002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003836193s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-708263 "pgrep -a kubelet"
E0926 23:20:37.495453   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/old-k8s-version-011002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I0926 23:20:37.641060   13040 config.go:182] Loaded profile config "kindnet-708263": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-708263 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-465jl" [b162885b-9389-48c6-9ce0-9f10f582db33] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-465jl" [b162885b-9389-48c6-9ce0-9f10f582db33] Running
E0926 23:20:42.616976   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/old-k8s-version-011002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004379103s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (35.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-708263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-708263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (35.309702949s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (35.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-708263 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)
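The DNS step confirms that in-cluster name resolution works through the plugin by resolving the kubernetes service from inside the netcat pod. A hedged manual sketch, resolving both the short name used by the test and the fully qualified service name:

    # Resolve the API server service via cluster DNS from inside the test pod.
    kubectl --context kindnet-708263 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context kindnet-708263 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local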

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-708263 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-708263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-708263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
I0926 23:20:47.195941   13040 config.go:182] Loaded profile config "custom-flannel-708263": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)
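Localhost and HairPin both probe TCP reachability from inside the pod with netcat: the first against 127.0.0.1, the second against the pod's own Service name ("netcat"), which only succeeds when hairpin NAT is handled correctly by the plugin. A minimal reproduction of the two probes (flags copied from the log):

    # Localhost probe: the container must reach its own listening port.
    kubectl --context kindnet-708263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # Hairpin probe: traffic sent to the pod's own Service must loop back successfully.
    kubectl --context kindnet-708263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"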

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-708263 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-q6hxs" [c7647ac4-18d1-4fca-9d09-7a83477c5e2f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-q6hxs" [c7647ac4-18d1-4fca-9d09-7a83477c5e2f] Running
E0926 23:20:52.858772   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/old-k8s-version-011002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003714062s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-708263 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-708263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-708263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (46.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-708263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0926 23:21:13.340683   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/old-k8s-version-011002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-708263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (46.845170597s)
--- PASS: TestNetworkPlugins/group/flannel/Start (46.85s)
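For the flannel variant, --cni=flannel makes minikube deploy the kube-flannel DaemonSet instead of kindnet. After the start completes, a quick hedged sanity check is to confirm that DaemonSet is up (namespace and label taken from the ControllerPod check later in this log):

    minikube start -p flannel-708263 --memory=3072 --cni=flannel \
      --driver=docker --container-runtime=containerd --wait=true --wait-timeout=15m
    # Confirm the flannel DaemonSet pods are running.
    kubectl --context flannel-708263 -n kube-flannel get pods -l app=flannel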

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (64.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-708263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-708263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m4.485049135s)
--- PASS: TestNetworkPlugins/group/bridge/Start (64.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-708263 "pgrep -a kubelet"
I0926 23:21:21.532232   13040 config.go:182] Loaded profile config "enable-default-cni-708263": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-708263 replace --force -f testdata/netcat-deployment.yaml
I0926 23:21:22.231808   13040 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0926 23:21:22.393727   13040 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kzl8f" [b7c6867b-ecd3-412b-84bb-b824dcb745f1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kzl8f" [b7c6867b-ecd3-412b-84bb-b824dcb745f1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003272063s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)
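The two kapi.go lines above show the helper polling the netcat deployment until its observed generation and replica counts converge before it starts waiting on the pods. kubectl can express the same stabilization wait directly (a sketch against the same deployment name):

    # Block until the netcat deployment's rollout has converged.
    kubectl --context enable-default-cni-708263 rollout status deployment/netcat --timeout=120s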

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-708263 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-708263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-708263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-m6xdk" [3c40a32c-d351-49f1-bd47-1127ddb05a8e] Running
E0926 23:21:54.302493   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/old-k8s-version-011002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:21:54.843813   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/no-preload-703674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:21:57.765522   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/addons-048605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003361078s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-708263 "pgrep -a kubelet"
I0926 23:21:59.189463   13040 config.go:182] Loaded profile config "flannel-708263": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-708263 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xbfff" [a77c0d1f-bd38-49b8-86de-7aca052448dc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0926 23:22:00.555256   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/functional-459506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-xbfff" [a77c0d1f-bd38-49b8-86de-7aca052448dc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.00672273s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-708263 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-708263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-708263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-708263 "pgrep -a kubelet"
I0926 23:22:22.834672   13040 config.go:182] Loaded profile config "bridge-708263": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-708263 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wpmg8" [e76c87f2-e22c-4b9a-8bc1-fa16aff61c32] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wpmg8" [e76c87f2-e22c-4b9a-8bc1-fa16aff61c32] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003468873s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-708263 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-708263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-708263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)
E0926 23:22:56.287882   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/no-preload-703674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:23:00.339780   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/default-k8s-diff-port-682182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:23:00.346135   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/default-k8s-diff-port-682182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:23:00.357434   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/default-k8s-diff-port-682182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:23:00.378722   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/default-k8s-diff-port-682182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:23:00.420024   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/default-k8s-diff-port-682182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:23:00.501361   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/default-k8s-diff-port-682182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:23:00.662895   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/default-k8s-diff-port-682182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:23:00.984520   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/default-k8s-diff-port-682182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:23:01.626543   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/default-k8s-diff-port-682182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:23:02.907882   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/default-k8s-diff-port-682182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:23:05.470026   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/default-k8s-diff-port-682182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:23:10.592004   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/default-k8s-diff-port-682182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:23:16.224854   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/old-k8s-version-011002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:23:20.833436   13040 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/default-k8s-diff-port-682182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
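The repeated cert_rotation errors above come from client-go trying to reload client certificates for profiles (old-k8s-version-011002, no-preload-703674, default-k8s-diff-port-682182, functional-459506, addons-048605) whose .minikube/profiles directories were already removed by earlier tests; every surrounding test here still passes, so they read as log noise rather than failures. If the same messages appear outside CI, a hedged cleanup is to delete the stale profiles so no kubeconfig entry references the missing certificates:

    # List known profiles, then delete any whose files are already gone.
    minikube profile list
    minikube delete -p default-k8s-diff-port-682182   # repeat per stale profile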

                                                
                                    

Test skip (25/331)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-026715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-026715
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-708263 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-708263

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-708263

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-708263

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-708263

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-708263

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-708263

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-708263

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-708263

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-708263

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-708263

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-708263

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-708263" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-708263" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21642-9508/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:13:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-655811
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21642-9508/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:13:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: missing-upgrade-522237
contexts:
- context:
    cluster: kubernetes-upgrade-655811
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:13:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-655811
  name: kubernetes-upgrade-655811
- context:
    cluster: missing-upgrade-522237
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:13:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-522237
  name: missing-upgrade-522237
current-context: kubernetes-upgrade-655811
kind: Config
users:
- name: kubernetes-upgrade-655811
  user:
    client-certificate: /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/kubernetes-upgrade-655811/client.crt
    client-key: /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/kubernetes-upgrade-655811/client.key
- name: missing-upgrade-522237
  user:
    client-certificate: /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/missing-upgrade-522237/client.crt
    client-key: /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/missing-upgrade-522237/client.key
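The debug collector dumps the merged kubeconfig so it is obvious which contexts still exist; here no kubenet-708263 context was ever created, which is why every kubectl probe above fails with "context was not found". A hedged sketch of inspecting and pruning leftover contexts from such a kubeconfig (the context name below is just an example taken from the dump):

    # Show all contexts and which one is current.
    kubectl config get-contexts
    # Remove an entry that points at a cluster which no longer exists.
    kubectl config delete-context missing-upgrade-522237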

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-708263

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708263"

                                                
                                                
----------------------- debugLogs end: kubenet-708263 [took: 3.146858504s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-708263" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-708263
--- SKIP: TestNetworkPlugins/group/kubenet (3.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-708263 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-708263

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-708263

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-708263

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-708263

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-708263

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-708263

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-708263

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-708263

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-708263

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-708263

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-708263

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-708263" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-708263

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-708263

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-708263

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-708263

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-708263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-708263" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21642-9508/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:13:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: missing-upgrade-522237
contexts:
- context:
    cluster: missing-upgrade-522237
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:13:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-522237
  name: missing-upgrade-522237
current-context: ""
kind: Config
users:
- name: missing-upgrade-522237
  user:
    client-certificate: /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/missing-upgrade-522237/client.crt
    client-key: /home/jenkins/minikube-integration/21642-9508/.minikube/profiles/missing-upgrade-522237/client.key
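
The dump above explains the repeated context errors in this section: the only cluster, context, and user in the kubeconfig belong to missing-upgrade-522237, and current-context is empty, so every kubectl call that names cilium-708263 has nothing to resolve. A small, hypothetical client-go sketch performing the same lookup (the kubeconfig path below is illustrative, not taken from this report):

// Hypothetical sketch using k8s.io/client-go: load a kubeconfig and report
// whether a named context exists, mirroring the "context was not found"
// failures above.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		fmt.Println("failed to load kubeconfig:", err)
		return
	}
	_, ok := cfg.Contexts["cilium-708263"]
	fmt.Println("context cilium-708263 exists:", ok)    // false for the config dumped above
	fmt.Println("current-context:", cfg.CurrentContext) // "" here
}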

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-708263

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-708263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708263"

                                                
                                                
----------------------- debugLogs end: cilium-708263 [took: 5.480965809s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-708263" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-708263
--- SKIP: TestNetworkPlugins/group/cilium (5.62s)
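As with kubenet, this is a SKIP rather than a failure: the guard at net_test.go:102 bails out before any cluster is created, and the post-mortem debug-log collection then appears to run against a profile and context that never existed. A minimal, hypothetical sketch of that skip pattern (not the actual minikube source):

// Hypothetical stand-in for the guard reported at net_test.go:102; the real
// skip condition in minikube's net_test.go is not reproduced here.
package net_sketch

import "testing"

func TestNetworkPluginsCiliumSketch(t *testing.T) {
	t.Skip("Skipping the test as it's interfering with other tests and is outdated")
	// Nothing after t.Skip runs; the debugLogs output seen above appears to come
	// from shared post-mortem helpers that run regardless of the skip.
}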

                                                
                                    