Test Report: Docker_Linux_containerd 22352

9a7985111956b2877773a073c576921d0f069a2d:2025-12-28:43023

Test failures (6/333)

TestPause/serial/VerifyStatus (1.84s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-327044 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-327044 --output=json --layout=cluster: exit status 2 (356.34589ms)

-- stdout --
	{"Name":"pause-327044","StatusCode":200,"StatusName":"OK","Step":"Done","StepDetail":"* Paused 0 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-327044","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":200,"StatusName":"OK"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
pause_test.go:200: incorrect status code: 200
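The stdout above shows why the assertion fires: the cluster-level StatusCode is 200 (OK) even though the kubelet component reports 405 (Stopped), and a just-paused cluster should not report OK. As a rough illustration only (not the actual test source), here is a minimal Go sketch of this kind of check; the struct shape mirrors the JSON above, and the expected "Paused" code of 418 is an assumption, since this report only confirms 200=OK and 405=Stopped:

	// Rough sketch (assumed, not pause_test.go itself): decode the
	// cluster-layout JSON printed above and verify the status code.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type clusterState struct {
		Name       string
		StatusCode int
		StatusName string
	}

	func main() {
		// A non-zero exit (as seen above) still leaves the JSON on stdout.
		out, _ := exec.Command("out/minikube-linux-amd64", "status",
			"-p", "pause-327044", "--output=json", "--layout=cluster").Output()

		var st clusterState
		if err := json.Unmarshal(out, &st); err != nil {
			fmt.Println("decode:", err)
			return
		}
		if st.StatusCode != 418 { // assumed expected "Paused" code
			fmt.Printf("incorrect status code: %d\n", st.StatusCode)
		}
	}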
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/VerifyStatus]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-327044
helpers_test.go:244: (dbg) docker inspect pause-327044:

-- stdout --
	[
	    {
	        "Id": "8dbcc5cd04c1698e2aa465643a77450ebc515606211717822484e514f6ad6e85",
	        "Created": "2025-12-28T06:57:13.286602091Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 768084,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T06:57:13.317062598Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/8dbcc5cd04c1698e2aa465643a77450ebc515606211717822484e514f6ad6e85/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8dbcc5cd04c1698e2aa465643a77450ebc515606211717822484e514f6ad6e85/hostname",
	        "HostsPath": "/var/lib/docker/containers/8dbcc5cd04c1698e2aa465643a77450ebc515606211717822484e514f6ad6e85/hosts",
	        "LogPath": "/var/lib/docker/containers/8dbcc5cd04c1698e2aa465643a77450ebc515606211717822484e514f6ad6e85/8dbcc5cd04c1698e2aa465643a77450ebc515606211717822484e514f6ad6e85-json.log",
	        "Name": "/pause-327044",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-327044:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-327044",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8dbcc5cd04c1698e2aa465643a77450ebc515606211717822484e514f6ad6e85",
	                "LowerDir": "/var/lib/docker/overlay2/d22d2c8cf5baddd3495c083432fea6f2cd6669acffc58cfdc311523f6dece243-init/diff:/var/lib/docker/overlay2/dfc7a4c580b7be84b0f83410b7478c6f6aa3f00b996556623ab9129bd6527422/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d22d2c8cf5baddd3495c083432fea6f2cd6669acffc58cfdc311523f6dece243/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d22d2c8cf5baddd3495c083432fea6f2cd6669acffc58cfdc311523f6dece243/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d22d2c8cf5baddd3495c083432fea6f2cd6669acffc58cfdc311523f6dece243/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-327044",
	                "Source": "/var/lib/docker/volumes/pause-327044/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-327044",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-327044",
	                "name.minikube.sigs.k8s.io": "pause-327044",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ceddb6ac6b22e2194c34fac4e69584dd303650039a08d5bc95a9002fbe1001bc",
	            "SandboxKey": "/var/run/docker/netns/ceddb6ac6b22",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33030"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33031"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33034"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33032"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33033"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-327044": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "70ac247aad7b5af7fbc54defc2d96e35f5cdf496bd44affc3254df0702f730ec",
	                    "EndpointID": "ae7a7a0dedecddb084c50052ed9d01885b7caff17a5590578b94a0cca1e0ed86",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "0a:5b:04:c7:36:91",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-327044",
	                        "8dbcc5cd04c1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
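Note from the inspect dump above that Docker's own state for the kic node container is "Running": true, "Paused": false, and the earlier pause step reported "Paused 0 containers", which matches the OK status the test rejects: the pause acted inside the node container, not on the outer container. A quick hedged sketch (profile name taken from this run) for pulling just those state fields via Docker's built-in --format Go templates:

	// Sketch: extract only the container state from `docker inspect`.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "inspect",
			"--format", "{{.State.Status}} paused={{.State.Paused}}",
			"pause-327044").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Print(string(out)) // here: "running paused=false"
	}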
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-327044 -n pause-327044
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-327044 -n pause-327044: exit status 2 (345.873039ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-327044 logs -n 25
helpers_test.go:261: TestPause/serial/VerifyStatus logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-948332 -- sudo cat /etc/kubernetes/admin.conf                                                                                  │ cert-options-948332       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p cert-options-948332                                                                                                                         │ cert-options-948332       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p running-upgrade-397849 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                 │ running-upgrade-397849    │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p test-preload-517921 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd             │ test-preload-517921       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ delete  │ -p running-upgrade-397849                                                                                                                      │ running-upgrade-397849    │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p kubernetes-upgrade-926675 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd │ kubernetes-upgrade-926675 │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ ssh     │ force-systemd-env-455558 ssh cat /etc/containerd/config.toml                                                                                   │ force-systemd-env-455558  │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p force-systemd-env-455558                                                                                                                    │ force-systemd-env-455558  │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p missing-upgrade-317261 --memory=3072 --driver=docker  --container-runtime=containerd                                                        │ missing-upgrade-317261    │ jenkins │ v1.35.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ stop    │ -p kubernetes-upgrade-926675 --alsologtostderr                                                                                                 │ kubernetes-upgrade-926675 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p kubernetes-upgrade-926675 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd │ kubernetes-upgrade-926675 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ start   │ -p missing-upgrade-317261 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                 │ missing-upgrade-317261    │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ image   │ test-preload-517921 image pull ghcr.io/medyagh/image-mirrors/busybox:latest                                                                    │ test-preload-517921       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ stop    │ -p test-preload-517921                                                                                                                         │ test-preload-517921       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p test-preload-517921 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd                       │ test-preload-517921       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p missing-upgrade-317261                                                                                                                      │ missing-upgrade-317261    │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p pause-327044 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd                                │ pause-327044              │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ image   │ test-preload-517921 image list                                                                                                                 │ test-preload-517921       │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p test-preload-517921                                                                                                                         │ test-preload-517921       │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p NoKubernetes-875069 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                            │ NoKubernetes-875069       │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ start   │ -p NoKubernetes-875069 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                    │ NoKubernetes-875069       │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p pause-327044 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                         │ pause-327044              │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p NoKubernetes-875069 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                    │ NoKubernetes-875069       │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p pause-327044 --alsologtostderr -v=5                                                                                                         │ pause-327044              │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p NoKubernetes-875069                                                                                                                         │ NoKubernetes-875069       │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:57:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:57:50.761013  775090 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:57:50.761321  775090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:50.761332  775090 out.go:374] Setting ErrFile to fd 2...
	I1228 06:57:50.761336  775090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:50.761523  775090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 06:57:50.761959  775090 out.go:368] Setting JSON to false
	I1228 06:57:50.763243  775090 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13215,"bootTime":1766891856,"procs":337,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:57:50.763308  775090 start.go:143] virtualization: kvm guest
	I1228 06:57:50.765858  775090 out.go:179] * [NoKubernetes-875069] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:57:50.767106  775090 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:57:50.767134  775090 notify.go:221] Checking for updates...
	I1228 06:57:50.769337  775090 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:57:50.770644  775090 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 06:57:50.771741  775090 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	I1228 06:57:50.772969  775090 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:57:50.774204  775090 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:57:50.775737  775090 config.go:182] Loaded profile config "NoKubernetes-875069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:57:50.776286  775090 start.go:1905] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1228 06:57:50.776373  775090 start.go:1810] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1228 06:57:50.776420  775090 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:57:50.803120  775090 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:57:50.803284  775090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:57:50.870845  775090 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-28 06:57:50.859338803 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:57:50.870948  775090 docker.go:319] overlay module found
	I1228 06:57:50.874191  775090 out.go:179] * Using the docker driver based on existing profile
	I1228 06:57:50.875370  775090 start.go:309] selected driver: docker
	I1228 06:57:50.875387  775090 start.go:928] validating driver "docker" against &{Name:NoKubernetes-875069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-875069 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:57:50.875477  775090 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:57:50.876051  775090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:57:50.937018  775090 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-28 06:57:50.92713464 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:57:50.937117  775090 start.go:1905] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1228 06:57:50.937324  775090 cni.go:84] Creating CNI manager for ""
	I1228 06:57:50.937411  775090 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 06:57:50.937437  775090 start.go:1905] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1228 06:57:50.937482  775090 start.go:353] cluster config:
	{Name:NoKubernetes-875069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-875069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:57:50.939644  775090 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-875069
	I1228 06:57:50.940660  775090 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 06:57:50.941659  775090 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:57:50.942586  775090 cache.go:59] Skipping Kubernetes image caching due to --no-kubernetes flag
	I1228 06:57:50.942618  775090 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:57:50.942753  775090 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/NoKubernetes-875069/config.json ...
	I1228 06:57:50.964905  775090 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 06:57:50.964929  775090 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 06:57:50.964943  775090 cache.go:243] Successfully downloaded all kic artifacts
	I1228 06:57:50.965006  775090 start.go:360] acquireMachinesLock for NoKubernetes-875069: {Name:mkea1e091d863ed14414015f3fb5b2b4c2c65fb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:57:50.965074  775090 start.go:364] duration metric: took 42.527µs to acquireMachinesLock for "NoKubernetes-875069"
	I1228 06:57:50.965109  775090 start.go:96] Skipping create...Using existing machine configuration
	I1228 06:57:50.965118  775090 fix.go:54] fixHost starting: 
	I1228 06:57:50.965432  775090 cli_runner.go:164] Run: docker container inspect NoKubernetes-875069 --format={{.State.Status}}
	I1228 06:57:50.986131  775090 fix.go:112] recreateIfNeeded on NoKubernetes-875069: state=Running err=<nil>
	W1228 06:57:50.986162  775090 fix.go:138] unexpected machine state, will restart: <nil>
	I1228 06:57:50.370329  773850 cli_runner.go:164] Run: docker network inspect pause-327044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:57:50.395150  773850 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1228 06:57:50.400564  773850 kubeadm.go:884] updating cluster {Name:pause-327044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-327044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 06:57:50.400743  773850 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 06:57:50.400814  773850 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:57:50.433432  773850 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 06:57:50.433458  773850 containerd.go:542] Images already preloaded, skipping extraction
	I1228 06:57:50.433513  773850 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:57:50.463539  773850 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 06:57:50.463583  773850 cache_images.go:86] Images are preloaded, skipping loading
	I1228 06:57:50.463594  773850 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
	I1228 06:57:50.463738  773850 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-327044 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:pause-327044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 06:57:50.463809  773850 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1228 06:57:50.493319  773850 cni.go:84] Creating CNI manager for ""
	I1228 06:57:50.493344  773850 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 06:57:50.493365  773850 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 06:57:50.493395  773850 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-327044 NodeName:pause-327044 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:57:50.493575  773850 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "pause-327044"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 06:57:50.493657  773850 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:57:50.501819  773850 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 06:57:50.501881  773850 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:57:50.510614  773850 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1228 06:57:50.524052  773850 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:57:50.536973  773850 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1228 06:57:50.549733  773850 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:57:50.554096  773850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:57:50.698387  773850 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:57:50.711928  773850 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/pause-327044 for IP: 192.168.85.2
	I1228 06:57:50.711945  773850 certs.go:195] generating shared ca certs ...
	I1228 06:57:50.711959  773850 certs.go:227] acquiring lock for ca certs: {Name:mkf3a34076ce55c96c0ca7e803bd863f5c48e3ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:50.712119  773850 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key
	I1228 06:57:50.712170  773850 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key
	I1228 06:57:50.712182  773850 certs.go:257] generating profile certs ...
	I1228 06:57:50.712324  773850 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/pause-327044/client.key
	I1228 06:57:50.712403  773850 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/pause-327044/apiserver.key.2100f19c
	I1228 06:57:50.712460  773850 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/pause-327044/proxy-client.key
	I1228 06:57:50.712598  773850 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem (1338 bytes)
	W1228 06:57:50.712643  773850 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878_empty.pem, impossibly tiny 0 bytes
	I1228 06:57:50.712655  773850 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:57:50.712692  773850 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem (1078 bytes)
	I1228 06:57:50.712725  773850 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:57:50.712754  773850 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem (1675 bytes)
	I1228 06:57:50.712829  773850 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem (1708 bytes)
	I1228 06:57:50.714355  773850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:57:50.735185  773850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:57:50.755501  773850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:57:50.777833  773850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:57:50.798449  773850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/pause-327044/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1228 06:57:50.819374  773850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/pause-327044/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 06:57:50.843989  773850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/pause-327044/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:57:50.866292  773850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/pause-327044/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1228 06:57:50.887390  773850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem --> /usr/share/ca-certificates/555878.pem (1338 bytes)
	I1228 06:57:50.910342  773850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem --> /usr/share/ca-certificates/5558782.pem (1708 bytes)
	I1228 06:57:50.931315  773850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:57:50.950225  773850 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:57:50.964780  773850 ssh_runner.go:195] Run: openssl version
	I1228 06:57:50.972773  773850 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/555878.pem
	I1228 06:57:50.982188  773850 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/555878.pem /etc/ssl/certs/555878.pem
	I1228 06:57:50.991052  773850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/555878.pem
	I1228 06:57:50.995673  773850 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/555878.pem
	I1228 06:57:50.995732  773850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/555878.pem
	I1228 06:57:51.037862  773850 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 06:57:51.046579  773850 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5558782.pem
	I1228 06:57:51.054705  773850 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5558782.pem /etc/ssl/certs/5558782.pem
	I1228 06:57:51.062790  773850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5558782.pem
	I1228 06:57:51.067365  773850 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/5558782.pem
	I1228 06:57:51.067425  773850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5558782.pem
	I1228 06:57:51.103390  773850 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 06:57:51.111306  773850 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:57:51.125809  773850 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:57:51.136455  773850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:57:51.141202  773850 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:29 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:57:51.141269  773850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:57:51.179568  773850 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 06:57:51.188165  773850 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:57:51.192137  773850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 06:57:51.229914  773850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 06:57:51.265773  773850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 06:57:51.305294  773850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 06:57:51.345412  773850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 06:57:51.386157  773850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1228 06:57:51.421533  773850 kubeadm.go:401] StartCluster: {Name:pause-327044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-327044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:fal
se registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:57:51.421689  773850 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1228 06:57:51.447113  773850 cri.go:83] list returned 14 containers
	I1228 06:57:51.447186  773850 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:57:51.455752  773850 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 06:57:51.455772  773850 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 06:57:51.455830  773850 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 06:57:51.464972  773850 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 06:57:51.466067  773850 kubeconfig.go:125] found "pause-327044" server: "https://192.168.85.2:8443"
	I1228 06:57:51.467593  773850 kapi.go:59] client config for pause-327044: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22352-552174/.minikube/profiles/pause-327044/client.crt", KeyFile:"/home/jenkins/minikube-integration/22352-552174/.minikube/profiles/pause-327044/client.key", CAFile:"/home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1228 06:57:51.468045  773850 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1228 06:57:51.468061  773850 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1228 06:57:51.468066  773850 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1228 06:57:51.468070  773850 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1228 06:57:51.468076  773850 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1228 06:57:51.468082  773850 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1228 06:57:51.468480  773850 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 06:57:51.476134  773850 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1228 06:57:51.476159  773850 kubeadm.go:602] duration metric: took 20.381077ms to restartPrimaryControlPlane
	I1228 06:57:51.476167  773850 kubeadm.go:403] duration metric: took 54.647366ms to StartCluster
	I1228 06:57:51.476181  773850 settings.go:142] acquiring lock: {Name:mk0a3f928fed4bf13f8897ea15768d1c7b315118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:51.476260  773850 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 06:57:51.477466  773850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/kubeconfig: {Name:mk8eb66c78260a0013ac235827a08f86055faf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:51.477676  773850 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1228 06:57:51.477747  773850 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:57:51.477982  773850 config.go:182] Loaded profile config "pause-327044": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:57:51.479929  773850 out.go:179] * Verifying Kubernetes components...
	I1228 06:57:51.479933  773850 out.go:179] * Enabled addons: 
	I1228 06:57:50.989261  775090 out.go:252] * Updating the running docker "NoKubernetes-875069" container ...
	I1228 06:57:50.989294  775090 machine.go:94] provisionDockerMachine start ...
	I1228 06:57:50.989366  775090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-875069
	I1228 06:57:51.008794  775090 main.go:144] libmachine: Using SSH client type: native
	I1228 06:57:51.009076  775090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1228 06:57:51.009091  775090 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 06:57:51.135536  775090 main.go:144] libmachine: SSH cmd err, output: <nil>: NoKubernetes-875069
	
	I1228 06:57:51.135569  775090 ubuntu.go:182] provisioning hostname "NoKubernetes-875069"
	I1228 06:57:51.135646  775090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-875069
	I1228 06:57:51.155972  775090 main.go:144] libmachine: Using SSH client type: native
	I1228 06:57:51.156337  775090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1228 06:57:51.156358  775090 main.go:144] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-875069 && echo "NoKubernetes-875069" | sudo tee /etc/hostname
	I1228 06:57:51.291632  775090 main.go:144] libmachine: SSH cmd err, output: <nil>: NoKubernetes-875069
	
	I1228 06:57:51.291746  775090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-875069
	I1228 06:57:51.310947  775090 main.go:144] libmachine: Using SSH client type: native
	I1228 06:57:51.311206  775090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1228 06:57:51.311246  775090 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-875069' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-875069/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-875069' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 06:57:51.439765  775090 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 06:57:51.439802  775090 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-552174/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-552174/.minikube}
	I1228 06:57:51.439830  775090 ubuntu.go:190] setting up certificates
	I1228 06:57:51.439854  775090 provision.go:84] configureAuth start
	I1228 06:57:51.439941  775090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-875069
	I1228 06:57:51.460079  775090 provision.go:143] copyHostCerts
	I1228 06:57:51.460120  775090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem
	I1228 06:57:51.460155  775090 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem, removing ...
	I1228 06:57:51.460175  775090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem
	I1228 06:57:51.460285  775090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem (1078 bytes)
	I1228 06:57:51.460422  775090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem
	I1228 06:57:51.460454  775090 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem, removing ...
	I1228 06:57:51.460464  775090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem
	I1228 06:57:51.460504  775090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem (1123 bytes)
	I1228 06:57:51.460595  775090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem
	I1228 06:57:51.460624  775090 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem, removing ...
	I1228 06:57:51.460638  775090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem
	I1228 06:57:51.460675  775090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem (1675 bytes)
	I1228 06:57:51.460774  775090 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-875069 san=[127.0.0.1 192.168.76.2 NoKubernetes-875069 localhost minikube]
	I1228 06:57:51.543683  775090 provision.go:177] copyRemoteCerts
	I1228 06:57:51.543749  775090 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 06:57:51.543800  775090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-875069
	I1228 06:57:51.562552  775090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/NoKubernetes-875069/id_rsa Username:docker}
	I1228 06:57:51.657538  775090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1228 06:57:51.657607  775090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1228 06:57:51.676982  775090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1228 06:57:51.677049  775090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1228 06:57:51.697345  775090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1228 06:57:51.697405  775090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1228 06:57:51.716961  775090 provision.go:87] duration metric: took 277.082563ms to configureAuth
	I1228 06:57:51.716985  775090 ubuntu.go:206] setting minikube options for container-runtime
	I1228 06:57:51.717127  775090 config.go:182] Loaded profile config "NoKubernetes-875069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1228 06:57:51.717145  775090 machine.go:97] duration metric: took 727.838914ms to provisionDockerMachine
	I1228 06:57:51.717159  775090 start.go:293] postStartSetup for "NoKubernetes-875069" (driver="docker")
	I1228 06:57:51.717170  775090 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 06:57:51.717211  775090 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 06:57:51.717291  775090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-875069
	I1228 06:57:51.734810  775090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/NoKubernetes-875069/id_rsa Username:docker}
	I1228 06:57:51.827864  775090 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 06:57:51.832403  775090 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 06:57:51.832434  775090 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 06:57:51.832448  775090 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-552174/.minikube/addons for local assets ...
	I1228 06:57:51.832535  775090 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-552174/.minikube/files for local assets ...
	I1228 06:57:51.832657  775090 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem -> 5558782.pem in /etc/ssl/certs
	I1228 06:57:51.832671  775090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem -> /etc/ssl/certs/5558782.pem
	I1228 06:57:51.832809  775090 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 06:57:51.841194  775090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem --> /etc/ssl/certs/5558782.pem (1708 bytes)
	I1228 06:57:51.860021  775090 start.go:296] duration metric: took 142.846052ms for postStartSetup
	I1228 06:57:51.860125  775090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:57:51.860179  775090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-875069
	I1228 06:57:51.880069  775090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/NoKubernetes-875069/id_rsa Username:docker}
	I1228 06:57:51.973504  775090 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 06:57:51.979353  775090 fix.go:56] duration metric: took 1.014229386s for fixHost
	I1228 06:57:51.979380  775090 start.go:83] releasing machines lock for "NoKubernetes-875069", held for 1.014291987s
	I1228 06:57:51.979468  775090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-875069
	I1228 06:57:51.999065  775090 ssh_runner.go:195] Run: cat /version.json
	I1228 06:57:51.999088  775090 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 06:57:51.999129  775090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-875069
	I1228 06:57:51.999160  775090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-875069
	I1228 06:57:52.021491  775090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/NoKubernetes-875069/id_rsa Username:docker}
	I1228 06:57:52.022334  775090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/NoKubernetes-875069/id_rsa Username:docker}
	I1228 06:57:52.113709  775090 ssh_runner.go:195] Run: systemctl --version
	I1228 06:57:52.172508  775090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:52.187680  775090 out.go:179]   - Kubernetes: Stopping ...
	I1228 06:57:52.188766  775090 ssh_runner.go:195] Run: sudo systemctl stop -f kubelet
	I1228 06:57:52.227805  775090 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1228 06:57:52.249294  775090 cri.go:83] list returned 8 containers
	I1228 06:57:52.249361  775090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:52.263733  775090 out.go:179]   - Kubernetes: Stopped
	I1228 06:57:51.481186  773850 addons.go:530] duration metric: took 3.4374ms for enable addons: enabled=[]
	I1228 06:57:51.481204  773850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:57:51.608700  773850 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:57:51.622084  773850 node_ready.go:35] waiting up to 6m0s for node "pause-327044" to be "Ready" ...
	I1228 06:57:51.629616  773850 node_ready.go:49] node "pause-327044" is "Ready"
	I1228 06:57:51.629644  773850 node_ready.go:38] duration metric: took 7.513874ms for node "pause-327044" to be "Ready" ...
	I1228 06:57:51.629660  773850 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:57:51.629713  773850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:57:51.641786  773850 api_server.go:72] duration metric: took 164.075676ms to wait for apiserver process to appear ...
	I1228 06:57:51.641811  773850 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:57:51.641831  773850 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1228 06:57:51.645743  773850 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1228 06:57:51.646676  773850 api_server.go:141] control plane version: v1.35.0
	I1228 06:57:51.646702  773850 api_server.go:131] duration metric: took 4.882272ms to wait for apiserver health ...
	I1228 06:57:51.646712  773850 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:57:51.649662  773850 system_pods.go:59] 7 kube-system pods found
	I1228 06:57:51.649708  773850 system_pods.go:61] "coredns-7d764666f9-25xk9" [027bff7d-bef6-4d94-9cf5-0feb5f7d4c99] Running
	I1228 06:57:51.649714  773850 system_pods.go:61] "etcd-pause-327044" [a83ddd5b-58e9-4407-8242-451bb7a4f2c8] Running
	I1228 06:57:51.649719  773850 system_pods.go:61] "kindnet-tzx87" [b3862b54-c993-4353-bc0d-57485386eff2] Running
	I1228 06:57:51.649723  773850 system_pods.go:61] "kube-apiserver-pause-327044" [b9bcb09b-f0bf-4554-9a62-5f8279ff3c81] Running
	I1228 06:57:51.649732  773850 system_pods.go:61] "kube-controller-manager-pause-327044" [fdfdd570-5e85-4809-be2d-922dae6b4bb5] Running
	I1228 06:57:51.649741  773850 system_pods.go:61] "kube-proxy-8zhkz" [fda10b86-5089-4bf2-a2c1-b9f38a0784c4] Running
	I1228 06:57:51.649747  773850 system_pods.go:61] "kube-scheduler-pause-327044" [2c1f6f43-16ce-4b3c-a2b4-dae134ed5e0d] Running
	I1228 06:57:51.649757  773850 system_pods.go:74] duration metric: took 3.038094ms to wait for pod list to return data ...
	I1228 06:57:51.649768  773850 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:57:51.651574  773850 default_sa.go:45] found service account: "default"
	I1228 06:57:51.651596  773850 default_sa.go:55] duration metric: took 1.817222ms for default service account to be created ...
	I1228 06:57:51.651604  773850 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:57:51.654189  773850 system_pods.go:86] 7 kube-system pods found
	I1228 06:57:51.654229  773850 system_pods.go:89] "coredns-7d764666f9-25xk9" [027bff7d-bef6-4d94-9cf5-0feb5f7d4c99] Running
	I1228 06:57:51.654237  773850 system_pods.go:89] "etcd-pause-327044" [a83ddd5b-58e9-4407-8242-451bb7a4f2c8] Running
	I1228 06:57:51.654243  773850 system_pods.go:89] "kindnet-tzx87" [b3862b54-c993-4353-bc0d-57485386eff2] Running
	I1228 06:57:51.654248  773850 system_pods.go:89] "kube-apiserver-pause-327044" [b9bcb09b-f0bf-4554-9a62-5f8279ff3c81] Running
	I1228 06:57:51.654253  773850 system_pods.go:89] "kube-controller-manager-pause-327044" [fdfdd570-5e85-4809-be2d-922dae6b4bb5] Running
	I1228 06:57:51.654257  773850 system_pods.go:89] "kube-proxy-8zhkz" [fda10b86-5089-4bf2-a2c1-b9f38a0784c4] Running
	I1228 06:57:51.654262  773850 system_pods.go:89] "kube-scheduler-pause-327044" [2c1f6f43-16ce-4b3c-a2b4-dae134ed5e0d] Running
	I1228 06:57:51.654271  773850 system_pods.go:126] duration metric: took 2.660349ms to wait for k8s-apps to be running ...
	I1228 06:57:51.654283  773850 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:57:51.654328  773850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:51.675079  773850 system_svc.go:56] duration metric: took 20.785175ms WaitForService to wait for kubelet
	I1228 06:57:51.675119  773850 kubeadm.go:587] duration metric: took 197.410283ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:57:51.675142  773850 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:57:51.677741  773850 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:57:51.677770  773850 node_conditions.go:123] node cpu capacity is 8
	I1228 06:57:51.677785  773850 node_conditions.go:105] duration metric: took 2.637321ms to run NodePressure ...
	I1228 06:57:51.677799  773850 start.go:242] waiting for startup goroutines ...
	I1228 06:57:51.677813  773850 start.go:247] waiting for cluster config update ...
	I1228 06:57:51.677827  773850 start.go:256] writing updated cluster config ...
	I1228 06:57:51.678163  773850 ssh_runner.go:195] Run: rm -f paused
	I1228 06:57:51.682152  773850 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:57:51.682870  773850 kapi.go:59] client config for pause-327044: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22352-552174/.minikube/profiles/pause-327044/client.crt", KeyFile:"/home/jenkins/minikube-integration/22352-552174/.minikube/profiles/pause-327044/client.key", CAFile:"/home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1228 06:57:51.685578  773850 pod_ready.go:83] waiting for pod "coredns-7d764666f9-25xk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:51.690122  773850 pod_ready.go:94] pod "coredns-7d764666f9-25xk9" is "Ready"
	I1228 06:57:51.690147  773850 pod_ready.go:86] duration metric: took 4.546823ms for pod "coredns-7d764666f9-25xk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:51.692184  773850 pod_ready.go:83] waiting for pod "etcd-pause-327044" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:51.695656  773850 pod_ready.go:94] pod "etcd-pause-327044" is "Ready"
	I1228 06:57:51.695676  773850 pod_ready.go:86] duration metric: took 3.472564ms for pod "etcd-pause-327044" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:51.697527  773850 pod_ready.go:83] waiting for pod "kube-apiserver-pause-327044" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:51.701451  773850 pod_ready.go:94] pod "kube-apiserver-pause-327044" is "Ready"
	I1228 06:57:51.701474  773850 pod_ready.go:86] duration metric: took 3.923749ms for pod "kube-apiserver-pause-327044" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:51.703166  773850 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-327044" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:52.086412  773850 pod_ready.go:94] pod "kube-controller-manager-pause-327044" is "Ready"
	I1228 06:57:52.086440  773850 pod_ready.go:86] duration metric: took 383.255819ms for pod "kube-controller-manager-pause-327044" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:52.286426  773850 pod_ready.go:83] waiting for pod "kube-proxy-8zhkz" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:50.002298  757425 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1228 06:57:50.002716  757425 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1228 06:57:50.002815  757425 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1228 06:57:50.026255  757425 cri.go:83] list returned 4 containers
	I1228 06:57:50.026287  757425 logs.go:282] 0 containers: []
	W1228 06:57:50.026298  757425 logs.go:284] No container was found matching "kube-apiserver"
	I1228 06:57:50.026358  757425 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1228 06:57:50.044740  757425 cri.go:83] list returned 4 containers
	I1228 06:57:50.044769  757425 logs.go:282] 0 containers: []
	W1228 06:57:50.044778  757425 logs.go:284] No container was found matching "etcd"
	I1228 06:57:50.044837  757425 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1228 06:57:50.062619  757425 cri.go:83] list returned 4 containers
	I1228 06:57:50.062650  757425 logs.go:282] 0 containers: []
	W1228 06:57:50.062659  757425 logs.go:284] No container was found matching "coredns"
	I1228 06:57:50.062702  757425 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1228 06:57:50.080107  757425 cri.go:83] list returned 4 containers
	I1228 06:57:50.080131  757425 logs.go:282] 0 containers: []
	W1228 06:57:50.080138  757425 logs.go:284] No container was found matching "kube-scheduler"
	I1228 06:57:50.080178  757425 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1228 06:57:50.097146  757425 cri.go:83] list returned 4 containers
	I1228 06:57:50.097177  757425 logs.go:282] 0 containers: []
	W1228 06:57:50.097185  757425 logs.go:284] No container was found matching "kube-proxy"
	I1228 06:57:50.097246  757425 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1228 06:57:50.114299  757425 cri.go:83] list returned 4 containers
	I1228 06:57:50.114328  757425 logs.go:282] 0 containers: []
	W1228 06:57:50.114338  757425 logs.go:284] No container was found matching "kube-controller-manager"
	I1228 06:57:50.114391  757425 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1228 06:57:50.131981  757425 cri.go:83] list returned 4 containers
	I1228 06:57:50.132008  757425 logs.go:282] 0 containers: []
	W1228 06:57:50.132017  757425 logs.go:284] No container was found matching "kindnet"
	I1228 06:57:50.132071  757425 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1228 06:57:50.149729  757425 cri.go:83] list returned 4 containers
	I1228 06:57:50.149758  757425 logs.go:282] 0 containers: []
	W1228 06:57:50.149767  757425 logs.go:284] No container was found matching "storage-provisioner"
	I1228 06:57:50.149778  757425 logs.go:123] Gathering logs for kubelet ...
	I1228 06:57:50.149793  757425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 06:57:50.242920  757425 logs.go:123] Gathering logs for dmesg ...
	I1228 06:57:50.242956  757425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 06:57:50.259983  757425 logs.go:123] Gathering logs for describe nodes ...
	I1228 06:57:50.260017  757425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 06:57:50.325972  757425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 06:57:50.325990  757425 logs.go:123] Gathering logs for containerd ...
	I1228 06:57:50.326005  757425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1228 06:57:50.379593  757425 logs.go:123] Gathering logs for container status ...
	I1228 06:57:50.379660  757425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 06:57:52.265084  775090 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 06:57:52.270319  775090 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 06:57:52.270389  775090 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 06:57:52.278712  775090 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 06:57:52.278735  775090 start.go:496] detecting cgroup driver to use...
	I1228 06:57:52.278768  775090 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 06:57:52.278815  775090 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1228 06:57:52.295031  775090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 06:57:52.307713  775090 docker.go:218] disabling cri-docker service (if available) ...
	I1228 06:57:52.307764  775090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 06:57:52.326250  775090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 06:57:52.338874  775090 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 06:57:52.442186  775090 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 06:57:52.564064  775090 docker.go:234] disabling docker service ...
	I1228 06:57:52.564152  775090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 06:57:52.583379  775090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 06:57:52.603996  775090 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 06:57:52.711464  775090 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 06:57:52.815512  775090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 06:57:52.830211  775090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 06:57:52.846966  775090 binary.go:59] Skipping Kubernetes binary download due to --no-kubernetes flag
	I1228 06:57:52.847044  775090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1228 06:57:52.856468  775090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 06:57:52.865349  775090 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1228 06:57:52.865401  775090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1228 06:57:52.874711  775090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 06:57:52.883645  775090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 06:57:52.893312  775090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 06:57:52.902623  775090 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 06:57:52.911201  775090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 06:57:52.920608  775090 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 06:57:52.927976  775090 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 06:57:52.935523  775090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:57:53.053603  775090 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1228 06:57:53.203953  775090 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1228 06:57:53.204028  775090 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1228 06:57:53.209109  775090 start.go:574] Will wait 60s for crictl version
	I1228 06:57:53.209172  775090 ssh_runner.go:195] Run: which crictl
	I1228 06:57:53.213510  775090 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 06:57:53.244557  775090 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1228 06:57:53.244630  775090 ssh_runner.go:195] Run: containerd --version
	I1228 06:57:53.268432  775090 ssh_runner.go:195] Run: containerd --version
	I1228 06:57:53.296350  775090 out.go:179] * Preparing containerd 2.2.1 ...
	I1228 06:57:53.298007  775090 ssh_runner.go:195] Run: rm -f paused
	I1228 06:57:53.303973  775090 out.go:179] * Done! minikube is ready without Kubernetes!
	I1228 06:57:53.305179  775090 out.go:203] ╭──────────────────────────────────────────────────────────╮
	│                                                          │
	│          * Things to try without Kubernetes ...          │
	│                                                          │
	│    - "minikube ssh" to SSH into minikube's node.         │
	│    - "minikube image" to build images without docker.    │
	│                                                          │
	╰──────────────────────────────────────────────────────────╯
	I1228 06:57:52.686816  773850 pod_ready.go:94] pod "kube-proxy-8zhkz" is "Ready"
	I1228 06:57:52.686848  773850 pod_ready.go:86] duration metric: took 400.3966ms for pod "kube-proxy-8zhkz" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:52.886851  773850 pod_ready.go:83] waiting for pod "kube-scheduler-pause-327044" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:53.286420  773850 pod_ready.go:94] pod "kube-scheduler-pause-327044" is "Ready"
	I1228 06:57:53.286450  773850 pod_ready.go:86] duration metric: took 399.565635ms for pod "kube-scheduler-pause-327044" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:53.286466  773850 pod_ready.go:40] duration metric: took 1.604265836s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:57:53.342018  773850 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 06:57:53.344256  773850 out.go:179] * Done! kubectl is now configured to use "pause-327044" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	0c035631e5151       aa5e3ebc0dfed       9 seconds ago       Running             coredns                   0                   55cc563e7b7d2       coredns-7d764666f9-25xk9               kube-system
	50672f9c32ffa       4921d7a6dffa9       20 seconds ago      Running             kindnet-cni               0                   c3693a1a8e48d       kindnet-tzx87                          kube-system
	9155fcef604b3       32652ff1bbe6b       24 seconds ago      Running             kube-proxy                0                   0f2c0a3b1a444       kube-proxy-8zhkz                       kube-system
	2b16ec6eaf39f       2c9a4b058bd7e       34 seconds ago      Running             kube-controller-manager   0                   0eeee28d3644b       kube-controller-manager-pause-327044   kube-system
	0c69c386edc79       550794e3b12ac       35 seconds ago      Running             kube-scheduler            0                   b4486600e9858       kube-scheduler-pause-327044            kube-system
	96f2e58cff4e8       5c6acd67e9cd1       35 seconds ago      Running             kube-apiserver            0                   5c580983540c8       kube-apiserver-pause-327044            kube-system
	ce2eef6d0943c       0a108f7189562       35 seconds ago      Running             etcd                      0                   d8290fea6fe0d       etcd-pause-327044                      kube-system
	
	
	==> containerd <==
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.204181271Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.204203973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.204268708Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.204319313Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.204342828Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.204354788Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.204367861Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.204382972Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.204396125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.204414757Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.204452883Z" level=info msg="Connect containerd service"
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.205282874Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.217033351Z" level=info msg="Start subscribing containerd event"
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.217088040Z" level=info msg="Start recovering state"
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.217321665Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.217397773Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.255182770Z" level=info msg="Start event monitor"
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.255241289Z" level=info msg="Start cni network conf syncer for default"
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.255254265Z" level=info msg="Start streaming server"
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.255263999Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.255273456Z" level=info msg="runtime interface starting up..."
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.255280764Z" level=info msg="starting plugins..."
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.255294866Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 28 06:57:50 pause-327044 containerd[2394]: time="2025-12-28T06:57:50.264647762Z" level=info msg="containerd successfully booted in 0.110355s"
	Dec 28 06:57:50 pause-327044 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	Name:               pause-327044
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-327044
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=pause-327044
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T06_57_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 06:57:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-327044
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 06:57:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 06:57:45 +0000   Sun, 28 Dec 2025 06:57:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 06:57:45 +0000   Sun, 28 Dec 2025 06:57:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 06:57:45 +0000   Sun, 28 Dec 2025 06:57:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 06:57:45 +0000   Sun, 28 Dec 2025 06:57:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-327044
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                ce751e5d-a580-4467-a1ff-dc48ffb99606
	  Boot ID:                    b0f6328b-901c-4d58-bf8e-80c711dcb897
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-25xk9                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-pause-327044                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-tzx87                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-327044             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-pause-327044    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-8zhkz                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-pause-327044             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node pause-327044 event: Registered Node pause-327044 in Controller
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 06:57:55 up  3:40,  0 user,  load average: 3.84, 3.05, 14.94
	Linux pause-327044 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 06:57:29 pause-327044 kubelet[1421]: I1228 06:57:29.745311    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3862b54-c993-4353-bc0d-57485386eff2-lib-modules\") pod \"kindnet-tzx87\" (UID: \"b3862b54-c993-4353-bc0d-57485386eff2\") " pod="kube-system/kindnet-tzx87"
	Dec 28 06:57:29 pause-327044 kubelet[1421]: I1228 06:57:29.745354    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7zl2\" (UniqueName: \"kubernetes.io/projected/b3862b54-c993-4353-bc0d-57485386eff2-kube-api-access-z7zl2\") pod \"kindnet-tzx87\" (UID: \"b3862b54-c993-4353-bc0d-57485386eff2\") " pod="kube-system/kindnet-tzx87"
	Dec 28 06:57:29 pause-327044 kubelet[1421]: I1228 06:57:29.745376    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fda10b86-5089-4bf2-a2c1-b9f38a0784c4-kube-proxy\") pod \"kube-proxy-8zhkz\" (UID: \"fda10b86-5089-4bf2-a2c1-b9f38a0784c4\") " pod="kube-system/kube-proxy-8zhkz"
	Dec 28 06:57:29 pause-327044 kubelet[1421]: I1228 06:57:29.745413    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fda10b86-5089-4bf2-a2c1-b9f38a0784c4-lib-modules\") pod \"kube-proxy-8zhkz\" (UID: \"fda10b86-5089-4bf2-a2c1-b9f38a0784c4\") " pod="kube-system/kube-proxy-8zhkz"
	Dec 28 06:57:29 pause-327044 kubelet[1421]: I1228 06:57:29.745432    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxjbk\" (UniqueName: \"kubernetes.io/projected/fda10b86-5089-4bf2-a2c1-b9f38a0784c4-kube-api-access-hxjbk\") pod \"kube-proxy-8zhkz\" (UID: \"fda10b86-5089-4bf2-a2c1-b9f38a0784c4\") " pod="kube-system/kube-proxy-8zhkz"
	Dec 28 06:57:29 pause-327044 kubelet[1421]: E1228 06:57:29.856610    1421 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-327044" containerName="kube-controller-manager"
	Dec 28 06:57:30 pause-327044 kubelet[1421]: I1228 06:57:30.858877    1421 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-8zhkz" podStartSLOduration=1.858859847 podStartE2EDuration="1.858859847s" podCreationTimestamp="2025-12-28 06:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:57:30.858847647 +0000 UTC m=+7.125532242" watchObservedRunningTime="2025-12-28 06:57:30.858859847 +0000 UTC m=+7.125544424"
	Dec 28 06:57:32 pause-327044 kubelet[1421]: E1228 06:57:32.818519    1421 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-327044" containerName="etcd"
	Dec 28 06:57:33 pause-327044 kubelet[1421]: E1228 06:57:33.942288    1421 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-327044" containerName="kube-scheduler"
	Dec 28 06:57:34 pause-327044 kubelet[1421]: I1228 06:57:34.867728    1421 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-tzx87" podStartSLOduration=1.997574582 podStartE2EDuration="5.867708541s" podCreationTimestamp="2025-12-28 06:57:29 +0000 UTC" firstStartedPulling="2025-12-28 06:57:30.385670034 +0000 UTC m=+6.652354591" lastFinishedPulling="2025-12-28 06:57:34.255803983 +0000 UTC m=+10.522488550" observedRunningTime="2025-12-28 06:57:34.867325361 +0000 UTC m=+11.134009955" watchObservedRunningTime="2025-12-28 06:57:34.867708541 +0000 UTC m=+11.134393116"
	Dec 28 06:57:35 pause-327044 kubelet[1421]: E1228 06:57:35.386534    1421 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-327044" containerName="kube-apiserver"
	Dec 28 06:57:39 pause-327044 kubelet[1421]: E1228 06:57:39.862355    1421 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-327044" containerName="kube-controller-manager"
	Dec 28 06:57:42 pause-327044 kubelet[1421]: E1228 06:57:42.820104    1421 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-327044" containerName="etcd"
	Dec 28 06:57:43 pause-327044 kubelet[1421]: E1228 06:57:43.946481    1421 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-327044" containerName="kube-scheduler"
	Dec 28 06:57:45 pause-327044 kubelet[1421]: I1228 06:57:45.126097    1421 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 28 06:57:45 pause-327044 kubelet[1421]: I1228 06:57:45.256444    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4qpd\" (UniqueName: \"kubernetes.io/projected/027bff7d-bef6-4d94-9cf5-0feb5f7d4c99-kube-api-access-p4qpd\") pod \"coredns-7d764666f9-25xk9\" (UID: \"027bff7d-bef6-4d94-9cf5-0feb5f7d4c99\") " pod="kube-system/coredns-7d764666f9-25xk9"
	Dec 28 06:57:45 pause-327044 kubelet[1421]: I1228 06:57:45.256514    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/027bff7d-bef6-4d94-9cf5-0feb5f7d4c99-config-volume\") pod \"coredns-7d764666f9-25xk9\" (UID: \"027bff7d-bef6-4d94-9cf5-0feb5f7d4c99\") " pod="kube-system/coredns-7d764666f9-25xk9"
	Dec 28 06:57:45 pause-327044 kubelet[1421]: E1228 06:57:45.882278    1421 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-25xk9" containerName="coredns"
	Dec 28 06:57:45 pause-327044 kubelet[1421]: I1228 06:57:45.896141    1421 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-25xk9" podStartSLOduration=16.896119898 podStartE2EDuration="16.896119898s" podCreationTimestamp="2025-12-28 06:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:57:45.895609159 +0000 UTC m=+22.162293734" watchObservedRunningTime="2025-12-28 06:57:45.896119898 +0000 UTC m=+22.162804473"
	Dec 28 06:57:46 pause-327044 kubelet[1421]: E1228 06:57:46.884445    1421 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-25xk9" containerName="coredns"
	Dec 28 06:57:47 pause-327044 kubelet[1421]: E1228 06:57:47.886869    1421 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-25xk9" containerName="coredns"
	Dec 28 06:57:53 pause-327044 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 28 06:57:53 pause-327044 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 28 06:57:53 pause-327044 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 06:57:53 pause-327044 systemd[1]: kubelet.service: Consumed 1.431s CPU time.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-327044 -n pause-327044
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-327044 -n pause-327044: exit status 2 (334.397526ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
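The helper treats exit status 2 as potentially benign ("may be ok") because minikube status signals non-Running components through its exit code, and a paused cluster is expected to have a stopped kubelet. A minimal sketch of the same check, separating the exit code from the formatted value (profile name from this run):

	out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p pause-327044 -n pause-327044
	echo "exit=$?"   # nonzero only flags a non-Running component; the printed value is the real assertion target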
helpers_test.go:270: (dbg) Run:  kubectl --context pause-327044 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/VerifyStatus FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/VerifyStatus (1.84s)

TestStartStop/group/no-preload/serial/Pause (6.07s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-456925 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-456925 -n no-preload-456925
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-456925 -n no-preload-456925: exit status 2 (339.452243ms)

-- stdout --
	Running

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Running"; want = "Paused"
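This line is the actual assertion failure: after pause, the helper expects the apiserver to report "Paused", but this run still observed "Running". With the containerd runtime, pausing freezes the control-plane tasks, so their state can be cross-checked on the node (a diagnostic sketch, assuming containerd's standard k8s.io namespace inside the kic container):

	# Frozen tasks show STATUS PAUSED; RUNNING for kube-apiserver would corroborate the failed assertion
	out/minikube-linux-amd64 ssh -p no-preload-456925 -- sudo ctr --namespace k8s.io tasks ls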
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-456925 -n no-preload-456925
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-456925 -n no-preload-456925: exit status 2 (345.425718ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-456925 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-456925 -n no-preload-456925
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-456925 -n no-preload-456925
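The lines above are the full pause/unpause round trip the test drives. Reproducing it by hand looks roughly like this (commands taken from this run; whether the apiserver reads Paused or Running depends on the pause actually taking effect):

	out/minikube-linux-amd64 pause -p no-preload-456925 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p no-preload-456925 -n no-preload-456925   # want: Paused
	out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p no-preload-456925 -n no-preload-456925     # want: Stopped
	out/minikube-linux-amd64 unpause -p no-preload-456925 --alsologtostderr -v=1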
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-456925
helpers_test.go:244: (dbg) docker inspect no-preload-456925:

-- stdout --
	[
	    {
	        "Id": "8cefd7db6fd31061281db3bdce07f250da333a3e3365a1c9b7fcf389b1631201",
	        "Created": "2025-12-28T07:02:21.77133894Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 873680,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:03:32.562892545Z",
	            "FinishedAt": "2025-12-28T07:03:31.640759351Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/8cefd7db6fd31061281db3bdce07f250da333a3e3365a1c9b7fcf389b1631201/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8cefd7db6fd31061281db3bdce07f250da333a3e3365a1c9b7fcf389b1631201/hostname",
	        "HostsPath": "/var/lib/docker/containers/8cefd7db6fd31061281db3bdce07f250da333a3e3365a1c9b7fcf389b1631201/hosts",
	        "LogPath": "/var/lib/docker/containers/8cefd7db6fd31061281db3bdce07f250da333a3e3365a1c9b7fcf389b1631201/8cefd7db6fd31061281db3bdce07f250da333a3e3365a1c9b7fcf389b1631201-json.log",
	        "Name": "/no-preload-456925",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-456925:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-456925",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8cefd7db6fd31061281db3bdce07f250da333a3e3365a1c9b7fcf389b1631201",
	                "LowerDir": "/var/lib/docker/overlay2/9e5bbb928c3b324466ec7a49a68fe75c2a027b3122cecb6a3ff12ffcc67164b2-init/diff:/var/lib/docker/overlay2/dfc7a4c580b7be84b0f83410b7478c6f6aa3f00b996556623ab9129bd6527422/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9e5bbb928c3b324466ec7a49a68fe75c2a027b3122cecb6a3ff12ffcc67164b2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9e5bbb928c3b324466ec7a49a68fe75c2a027b3122cecb6a3ff12ffcc67164b2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9e5bbb928c3b324466ec7a49a68fe75c2a027b3122cecb6a3ff12ffcc67164b2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-456925",
	                "Source": "/var/lib/docker/volumes/no-preload-456925/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-456925",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-456925",
	                "name.minikube.sigs.k8s.io": "no-preload-456925",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "268df8e5abee5b3cd47704904ff04b12dc84b734e029a5472bec4361f905066c",
	            "SandboxKey": "/var/run/docker/netns/268df8e5abee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-456925": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "01bcffc32464b95dc919335f97af3e70ebdcd82b5f44169bb87577b18116c422",
	                    "EndpointID": "99f009e50c09cb15e7f38cbc1c01a61e1bd5cc560d32180202d4fd772c7f227a",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "be:66:f5:70:a0:98",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-456925",
	                        "8cefd7db6fd3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
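The helpers mine the inspect dump above for specific fields; the same values can be pulled with Go templates rather than scanning the JSON (a sketch against this run's container name):

	# Container-level state as the docker driver sees it
	docker inspect --format '{{.State.Status}} paused={{.State.Paused}}' no-preload-456925
	# Host port mapped to the apiserver (8443/tcp), the same template shape the cli_runner uses below for 22/tcp
	docker inspect --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-456925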
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-456925 -n no-preload-456925
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-456925 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p stopped-upgrade-153407                                                                                                                                                                                                                           │ stopped-upgrade-153407       │ jenkins │ v1.37.0 │ 28 Dec 25 07:02 UTC │ 28 Dec 25 07:02 UTC │
	│ delete  │ -p disable-driver-mounts-284795                                                                                                                                                                                                                     │ disable-driver-mounts-284795 │ jenkins │ v1.37.0 │ 28 Dec 25 07:02 UTC │ 28 Dec 25 07:02 UTC │
	│ start   │ -p default-k8s-diff-port-129908 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-129908 │ jenkins │ v1.37.0 │ 28 Dec 25 07:02 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable metrics-server -p no-preload-456925 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ stop    │ -p no-preload-456925 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-805353 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ stop    │ -p old-k8s-version-805353 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable dashboard -p no-preload-456925 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p no-preload-456925 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                       │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-805353 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p old-k8s-version-805353 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-982151 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-982151           │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ stop    │ -p embed-certs-982151 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-982151           │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-129908 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-129908 │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ stop    │ -p default-k8s-diff-port-129908 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-129908 │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-982151 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-982151           │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p embed-certs-982151 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                        │ embed-certs-982151           │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-129908 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-129908 │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p default-k8s-diff-port-129908 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-129908 │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │                     │
	│ image   │ no-preload-456925 image list --format=json                                                                                                                                                                                                          │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p no-preload-456925 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ image   │ old-k8s-version-805353 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ unpause │ -p no-preload-456925 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p old-k8s-version-805353 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ unpause │ -p old-k8s-version-805353 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:03:55
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:03:55.738532  882252 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:03:55.738689  882252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:03:55.738703  882252 out.go:374] Setting ErrFile to fd 2...
	I1228 07:03:55.738721  882252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:03:55.739259  882252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 07:03:55.740037  882252 out.go:368] Setting JSON to false
	I1228 07:03:55.742377  882252 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13580,"bootTime":1766891856,"procs":388,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 07:03:55.742479  882252 start.go:143] virtualization: kvm guest
	I1228 07:03:55.744573  882252 out.go:179] * [default-k8s-diff-port-129908] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 07:03:55.745969  882252 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:03:55.746036  882252 notify.go:221] Checking for updates...
	I1228 07:03:55.749018  882252 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:03:55.750368  882252 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:03:55.751423  882252 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	I1228 07:03:55.752505  882252 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 07:03:55.753752  882252 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:03:55.755380  882252 config.go:182] Loaded profile config "default-k8s-diff-port-129908": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:03:55.756046  882252 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:03:55.782846  882252 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 07:03:55.782996  882252 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:03:55.852117  882252 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-28 07:03:55.840745048 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 07:03:55.852300  882252 docker.go:319] overlay module found
	I1228 07:03:55.853848  882252 out.go:179] * Using the docker driver based on existing profile
	I1228 07:03:55.855042  882252 start.go:309] selected driver: docker
	I1228 07:03:55.855063  882252 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-129908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-129908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:03:55.855165  882252 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:03:55.855840  882252 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:03:55.917473  882252 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-28 07:03:55.906550203 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 07:03:55.917793  882252 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:03:55.917933  882252 cni.go:84] Creating CNI manager for ""
	I1228 07:03:55.918027  882252 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:03:55.918113  882252 start.go:353] cluster config:
	{Name:default-k8s-diff-port-129908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-129908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:03:55.919833  882252 out.go:179] * Starting "default-k8s-diff-port-129908" primary control-plane node in "default-k8s-diff-port-129908" cluster
	I1228 07:03:55.920969  882252 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:03:55.922122  882252 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:03:55.923232  882252 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:03:55.923274  882252 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-552174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-amd64.tar.lz4
	I1228 07:03:55.923284  882252 cache.go:65] Caching tarball of preloaded images
	I1228 07:03:55.923341  882252 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:03:55.923383  882252 preload.go:251] Found /home/jenkins/minikube-integration/22352-552174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1228 07:03:55.923396  882252 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1228 07:03:55.923509  882252 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/config.json ...
	I1228 07:03:55.945420  882252 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:03:55.945450  882252 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:03:55.945480  882252 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:03:55.945524  882252 start.go:360] acquireMachinesLock for default-k8s-diff-port-129908: {Name:mk66a28d31a5a7f03f0abd1dfec44af622c036e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:03:55.945595  882252 start.go:364] duration metric: took 45.236µs to acquireMachinesLock for "default-k8s-diff-port-129908"
	I1228 07:03:55.945619  882252 start.go:96] Skipping create...Using existing machine configuration
	I1228 07:03:55.945629  882252 fix.go:54] fixHost starting: 
	I1228 07:03:55.945869  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:03:55.966941  882252 fix.go:112] recreateIfNeeded on default-k8s-diff-port-129908: state=Stopped err=<nil>
	W1228 07:03:55.966987  882252 fix.go:138] unexpected machine state, will restart: <nil>
	W1228 07:03:53.203811  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:03:55.206598  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	I1228 07:03:54.958080  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 07:03:54.958206  880223 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 07:03:54.958289  880223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-982151
	I1228 07:03:54.961176  880223 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:03:54.961312  880223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 07:03:54.961373  880223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-982151
	I1228 07:03:54.988639  880223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/embed-certs-982151/id_rsa Username:docker}
	I1228 07:03:54.999061  880223 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 07:03:54.999103  880223 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 07:03:54.999305  880223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-982151
	I1228 07:03:55.003431  880223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/embed-certs-982151/id_rsa Username:docker}
	I1228 07:03:55.007935  880223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/embed-certs-982151/id_rsa Username:docker}
	I1228 07:03:55.029653  880223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/embed-certs-982151/id_rsa Username:docker}
	I1228 07:03:55.081852  880223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:03:55.098405  880223 node_ready.go:35] waiting up to 6m0s for node "embed-certs-982151" to be "Ready" ...
	I1228 07:03:55.112422  880223 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1228 07:03:55.112450  880223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1228 07:03:55.121850  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:03:55.121890  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 07:03:55.121900  880223 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 07:03:55.136569  880223 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1228 07:03:55.136606  880223 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1228 07:03:55.143729  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:03:55.147425  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 07:03:55.147451  880223 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 07:03:55.172581  880223 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:03:55.172615  880223 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1228 07:03:55.176850  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 07:03:55.176888  880223 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 07:03:55.210186  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 07:03:55.210579  880223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 07:03:55.215104  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:03:55.244553  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 07:03:55.244590  880223 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1228 07:03:55.261806  880223 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1228 07:03:55.261878  880223 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1228 07:03:55.261947  880223 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1228 07:03:55.294458  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 07:03:55.294486  880223 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 07:03:55.326344  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 07:03:55.326372  880223 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 07:03:55.346621  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 07:03:55.346647  880223 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 07:03:55.375997  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:03:55.376020  880223 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 07:03:55.391494  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:03:55.441974  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:03:55.603856  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:03:57.038487  880223 node_ready.go:49] node "embed-certs-982151" is "Ready"
	I1228 07:03:57.038523  880223 node_ready.go:38] duration metric: took 1.940079394s for node "embed-certs-982151" to be "Ready" ...
	I1228 07:03:57.038543  880223 api_server.go:52] waiting for apiserver process to appear ...
	I1228 07:03:57.038606  880223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 07:03:57.704736  880223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.489588386s)
	I1228 07:03:57.704780  880223 addons.go:495] Verifying addon metrics-server=true in "embed-certs-982151"
	I1228 07:03:57.704836  880223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.313298327s)
	I1228 07:03:57.705109  880223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.263104063s)
	I1228 07:03:57.707172  880223 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-982151 addons enable metrics-server
	
	I1228 07:03:57.716966  880223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.11307566s)
	I1228 07:03:57.716984  880223 api_server.go:72] duration metric: took 2.803527403s to wait for apiserver process to appear ...
	I1228 07:03:57.717001  880223 api_server.go:88] waiting for apiserver healthz status ...
	I1228 07:03:57.717019  880223 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1228 07:03:57.719801  880223 out.go:179] * Enabled addons: metrics-server, dashboard, storage-provisioner, default-storageclass
	I1228 07:03:57.721544  880223 addons.go:530] duration metric: took 2.808027569s for enable addons: enabled=[metrics-server dashboard storage-provisioner default-storageclass]
	I1228 07:03:57.721710  880223 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 07:03:57.721773  880223 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
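	The two healthz dumps above are minikube's readiness polling: it retries GET /healthz until the [-] poststarthooks (RBAC bootstrap roles, system priority classes) flip to [+] and the endpoint returns 200. The endpoint can be probed the same way by hand (a sketch; /healthz is anonymously readable under the default system:public-info-viewer role, and -k skips the cluster-local CA):

		curl -ks 'https://192.168.94.2:8443/healthz?verbose'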
	W1228 07:03:53.930573  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:03:56.432760  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	I1228 07:03:55.969541  882252 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-129908" ...
	I1228 07:03:55.969643  882252 cli_runner.go:164] Run: docker start default-k8s-diff-port-129908
	I1228 07:03:56.285009  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:03:56.317632  882252 kic.go:430] container "default-k8s-diff-port-129908" state is running.
	I1228 07:03:56.318755  882252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-129908
	I1228 07:03:56.344384  882252 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/config.json ...
	I1228 07:03:56.344656  882252 machine.go:94] provisionDockerMachine start ...
	I1228 07:03:56.344759  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:03:56.367745  882252 main.go:144] libmachine: Using SSH client type: native
	I1228 07:03:56.368021  882252 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1228 07:03:56.368034  882252 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:03:56.368796  882252 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41308->127.0.0.1:33133: read: connection reset by peer
	I1228 07:03:59.512247  882252 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-129908
	
	I1228 07:03:59.512276  882252 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-129908"
	I1228 07:03:59.512350  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:03:59.534401  882252 main.go:144] libmachine: Using SSH client type: native
	I1228 07:03:59.534744  882252 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1228 07:03:59.534766  882252 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-129908 && echo "default-k8s-diff-port-129908" | sudo tee /etc/hostname
	I1228 07:03:59.684180  882252 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-129908
	
	I1228 07:03:59.684288  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:03:59.705307  882252 main.go:144] libmachine: Using SSH client type: native
	I1228 07:03:59.705585  882252 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1228 07:03:59.705613  882252 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-129908' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-129908/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-129908' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:03:59.844260  882252 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 07:03:59.844297  882252 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-552174/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-552174/.minikube}
	I1228 07:03:59.844323  882252 ubuntu.go:190] setting up certificates
	I1228 07:03:59.844346  882252 provision.go:84] configureAuth start
	I1228 07:03:59.844416  882252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-129908
	I1228 07:03:59.865173  882252 provision.go:143] copyHostCerts
	I1228 07:03:59.865247  882252 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem, removing ...
	I1228 07:03:59.865261  882252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem
	I1228 07:03:59.865342  882252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem (1078 bytes)
	I1228 07:03:59.865484  882252 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem, removing ...
	I1228 07:03:59.865498  882252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem
	I1228 07:03:59.865539  882252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem (1123 bytes)
	I1228 07:03:59.865612  882252 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem, removing ...
	I1228 07:03:59.865623  882252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem
	I1228 07:03:59.865658  882252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem (1675 bytes)
	I1228 07:03:59.865731  882252 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-129908 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-129908 localhost minikube]
	I1228 07:03:59.897890  882252 provision.go:177] copyRemoteCerts
	I1228 07:03:59.897972  882252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:03:59.898024  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:03:59.918735  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:00.019603  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1228 07:04:00.042819  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1228 07:04:00.064302  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 07:04:00.085628  882252 provision.go:87] duration metric: took 241.249279ms to configureAuth
	I1228 07:04:00.085661  882252 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:04:00.085909  882252 config.go:182] Loaded profile config "default-k8s-diff-port-129908": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:04:00.085931  882252 machine.go:97] duration metric: took 3.741255863s to provisionDockerMachine
	I1228 07:04:00.085943  882252 start.go:293] postStartSetup for "default-k8s-diff-port-129908" (driver="docker")
	I1228 07:04:00.085955  882252 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:04:00.086021  882252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:04:00.086092  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:00.107294  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:00.213532  882252 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:04:00.217985  882252 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:04:00.218016  882252 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:04:00.218030  882252 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-552174/.minikube/addons for local assets ...
	I1228 07:04:00.218175  882252 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-552174/.minikube/files for local assets ...
	I1228 07:04:00.218332  882252 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem -> 5558782.pem in /etc/ssl/certs
	I1228 07:04:00.218449  882252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:04:00.227795  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem --> /etc/ssl/certs/5558782.pem (1708 bytes)
	I1228 07:04:00.251291  882252 start.go:296] duration metric: took 165.331058ms for postStartSetup
	I1228 07:04:00.251386  882252 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:04:00.251464  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:00.276621  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:00.373799  882252 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:04:00.379539  882252 fix.go:56] duration metric: took 4.433903055s for fixHost
	I1228 07:04:00.379578  882252 start.go:83] releasing machines lock for "default-k8s-diff-port-129908", held for 4.433956892s
	I1228 07:04:00.379650  882252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-129908
	I1228 07:04:00.401020  882252 ssh_runner.go:195] Run: cat /version.json
	I1228 07:04:00.401076  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:00.401098  882252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 07:04:00.401197  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:00.423791  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:00.424146  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:00.518927  882252 ssh_runner.go:195] Run: systemctl --version
	I1228 07:04:00.588110  882252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:04:00.594268  882252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:04:00.594347  882252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:04:00.604690  882252 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 07:04:00.604711  882252 start.go:496] detecting cgroup driver to use...
	I1228 07:04:00.604747  882252 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 07:04:00.604794  882252 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1228 07:04:00.626916  882252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:04:00.642927  882252 docker.go:218] disabling cri-docker service (if available) ...
	I1228 07:04:00.643006  882252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 07:04:00.661151  882252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 07:04:00.677071  882252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 07:04:00.782881  882252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 07:04:00.886938  882252 docker.go:234] disabling docker service ...
	I1228 07:04:00.887034  882252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 07:04:00.905648  882252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 07:04:00.922255  882252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 07:04:01.032767  882252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 07:04:01.151567  882252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 07:04:01.167546  882252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:04:01.184683  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1228 07:04:01.195723  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:04:01.206605  882252 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1228 07:04:01.206685  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1228 07:04:01.217347  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:04:01.227406  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:04:01.238026  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:04:01.252602  882252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:04:01.262955  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:04:01.274767  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:04:01.285746  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1228 07:04:01.296288  882252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:04:01.305400  882252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:04:01.314805  882252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:04:01.409989  882252 ssh_runner.go:195] Run: sudo systemctl restart containerd
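The sed edits above rewrite /etc/containerd/config.toml in place before containerd is restarted. An illustrative check, not part of the test itself, run from the host against the node container to confirm the values those edits enforce:

	docker exec default-k8s-diff-port-129908 grep -E \
	    'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' \
	    /etc/containerd/config.toml
	# Expected, per the edits above:
	#   sandbox_image = "registry.k8s.io/pause:3.10.1"
	#   SystemdCgroup = true
	#   conf_dir = "/etc/cni/net.d"
	#   enable_unprivileged_ports = true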
	I1228 07:04:01.571109  882252 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1228 07:04:01.571191  882252 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1228 07:04:01.575916  882252 start.go:574] Will wait 60s for crictl version
	I1228 07:04:01.575986  882252 ssh_runner.go:195] Run: which crictl
	I1228 07:04:01.580123  882252 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:04:01.609855  882252 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1228 07:04:01.609941  882252 ssh_runner.go:195] Run: containerd --version
	I1228 07:04:01.636563  882252 ssh_runner.go:195] Run: containerd --version
	I1228 07:04:01.664300  882252 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	W1228 07:03:57.704861  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:00.204893  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	I1228 07:04:01.666266  882252 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-129908 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:04:01.685605  882252 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1228 07:04:01.690424  882252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:04:01.702737  882252 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-129908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-129908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:04:01.702926  882252 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:04:01.702997  882252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:04:01.733786  882252 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:04:01.733821  882252 containerd.go:542] Images already preloaded, skipping extraction
	I1228 07:04:01.733892  882252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:04:01.763427  882252 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:04:01.763453  882252 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:04:01.763463  882252 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 containerd true true} ...
	I1228 07:04:01.763630  882252 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-129908 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-129908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
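The kubelet unit drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp step a few lines below. If you need to inspect the merged unit on the node, one option (not run by the test) is:

	docker exec default-k8s-diff-port-129908 systemctl cat kubelet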
	I1228 07:04:01.763699  882252 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1228 07:04:01.793899  882252 cni.go:84] Creating CNI manager for ""
	I1228 07:04:01.793927  882252 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:04:01.793950  882252 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 07:04:01.793979  882252 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-129908 NodeName:default-k8s-diff-port-129908 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:04:01.794135  882252 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-129908"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
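The generated kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A hypothetical sanity check, assuming a kubeadm binary is present in the same binaries directory the test lists next, would be:

	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new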
	I1228 07:04:01.794234  882252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:04:01.803826  882252 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:04:01.803923  882252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:04:01.814997  882252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1228 07:04:01.830520  882252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:04:01.847094  882252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
	I1228 07:04:01.862575  882252 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:04:01.867240  882252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:04:01.879762  882252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:04:01.981753  882252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:04:02.008768  882252 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908 for IP: 192.168.85.2
	I1228 07:04:02.008791  882252 certs.go:195] generating shared ca certs ...
	I1228 07:04:02.008811  882252 certs.go:227] acquiring lock for ca certs: {Name:mkf3a34076ce55c96c0ca7e803bd863f5c48e3ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:02.008980  882252 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key
	I1228 07:04:02.009054  882252 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key
	I1228 07:04:02.009079  882252 certs.go:257] generating profile certs ...
	I1228 07:04:02.009241  882252 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/client.key
	I1228 07:04:02.009336  882252 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/apiserver.key.e6891321
	I1228 07:04:02.009417  882252 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/proxy-client.key
	I1228 07:04:02.009566  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem (1338 bytes)
	W1228 07:04:02.009614  882252 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878_empty.pem, impossibly tiny 0 bytes
	I1228 07:04:02.009629  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 07:04:02.009669  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem (1078 bytes)
	I1228 07:04:02.009721  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:04:02.009751  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem (1675 bytes)
	I1228 07:04:02.009804  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem (1708 bytes)
	I1228 07:04:02.010516  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:04:02.030969  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:04:02.050983  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:04:02.071754  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:04:02.095835  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1228 07:04:02.119207  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 07:04:02.138541  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:04:02.156606  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1228 07:04:02.175252  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem --> /usr/share/ca-certificates/555878.pem (1338 bytes)
	I1228 07:04:02.193782  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem --> /usr/share/ca-certificates/5558782.pem (1708 bytes)
	I1228 07:04:02.213892  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:04:02.232418  882252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:04:02.246379  882252 ssh_runner.go:195] Run: openssl version
	I1228 07:04:02.253696  882252 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:02.261337  882252 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:04:02.268782  882252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:02.272460  882252 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:29 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:02.272502  882252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:02.309757  882252 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:04:02.318273  882252 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/555878.pem
	I1228 07:04:02.327328  882252 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/555878.pem /etc/ssl/certs/555878.pem
	I1228 07:04:02.336776  882252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/555878.pem
	I1228 07:04:02.341018  882252 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/555878.pem
	I1228 07:04:02.341065  882252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/555878.pem
	I1228 07:04:02.375936  882252 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:04:02.383700  882252 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5558782.pem
	I1228 07:04:02.391577  882252 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5558782.pem /etc/ssl/certs/5558782.pem
	I1228 07:04:02.399167  882252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5558782.pem
	I1228 07:04:02.402932  882252 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/5558782.pem
	I1228 07:04:02.403000  882252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5558782.pem
	I1228 07:04:02.441765  882252 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
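Each "openssl x509 -hash -noout" run above prints the certificate's subject hash, and the "sudo test -L" that follows checks that /etc/ssl/certs/<hash>.0 is a symlink, which is the layout OpenSSL uses to look up trusted CAs. For example, for the minikubeCA cert checked at the top of this block:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${hash}.0"   # resolves to b5213941.0 in this run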
	I1228 07:04:02.450114  882252 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:04:02.454467  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 07:04:02.490848  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 07:04:02.528538  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 07:04:02.574210  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 07:04:02.634863  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 07:04:02.688071  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
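The "-checkend 86400" runs above exit non-zero if the certificate expires within the next 86400 seconds (24 hours), which lets the restart path flag certificates that are about to lapse. A standalone illustration using one of the same cert paths:

	openssl x509 -noout -checkend 86400 \
	    -in /var/lib/minikube/certs/front-proxy-client.crt \
	    && echo "valid for at least 24h" || echo "expiring within 24h"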
	I1228 07:04:02.737549  882252 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-129908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-129908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:04:02.737745  882252 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1228 07:04:02.763979  882252 cri.go:83] list returned 8 containers
	I1228 07:04:02.764053  882252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:04:02.776746  882252 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 07:04:02.776774  882252 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 07:04:02.776824  882252 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 07:04:02.788129  882252 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 07:04:02.789314  882252 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-129908" does not appear in /home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:04:02.789957  882252 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-552174/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-129908" cluster setting kubeconfig missing "default-k8s-diff-port-129908" context setting]
	I1228 07:04:02.793405  882252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/kubeconfig: {Name:mk8eb66c78260a0013ac235827a08f86055faf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:02.796509  882252 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 07:04:02.810377  882252 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1228 07:04:02.810427  882252 kubeadm.go:602] duration metric: took 33.643458ms to restartPrimaryControlPlane
	I1228 07:04:02.810439  882252 kubeadm.go:403] duration metric: took 72.900379ms to StartCluster
	I1228 07:04:02.810463  882252 settings.go:142] acquiring lock: {Name:mk0a3f928fed4bf13f8897ea15768d1c7b315118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:02.810543  882252 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:04:02.814033  882252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/kubeconfig: {Name:mk8eb66c78260a0013ac235827a08f86055faf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:02.814425  882252 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1228 07:04:02.814668  882252 config.go:182] Loaded profile config "default-k8s-diff-port-129908": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:04:02.814737  882252 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 07:04:02.814823  882252 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-129908"
	I1228 07:04:02.814837  882252 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-129908"
	W1228 07:04:02.814844  882252 addons.go:248] addon storage-provisioner should already be in state true
	I1228 07:04:02.814873  882252 host.go:66] Checking if "default-k8s-diff-port-129908" exists ...
	I1228 07:04:02.815322  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:04:02.815673  882252 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-129908"
	I1228 07:04:02.815706  882252 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-129908"
	I1228 07:04:02.815709  882252 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-129908"
	I1228 07:04:02.815744  882252 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-129908"
	W1228 07:04:02.815758  882252 addons.go:248] addon dashboard should already be in state true
	I1228 07:04:02.815802  882252 host.go:66] Checking if "default-k8s-diff-port-129908" exists ...
	I1228 07:04:02.815974  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:04:02.816174  882252 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-129908"
	I1228 07:04:02.816207  882252 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-129908"
	W1228 07:04:02.816231  882252 addons.go:248] addon metrics-server should already be in state true
	I1228 07:04:02.816264  882252 host.go:66] Checking if "default-k8s-diff-port-129908" exists ...
	I1228 07:04:02.816395  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:04:02.816719  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:04:02.819436  882252 out.go:179] * Verifying Kubernetes components...
	I1228 07:04:02.823186  882252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:04:02.861166  882252 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1228 07:04:02.861203  882252 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 07:04:02.862499  882252 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1228 07:04:02.862519  882252 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1228 07:04:02.862547  882252 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:04:02.862563  882252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 07:04:02.862596  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:02.862620  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:02.863967  882252 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-129908"
	W1228 07:04:02.863988  882252 addons.go:248] addon default-storageclass should already be in state true
	I1228 07:04:02.864027  882252 host.go:66] Checking if "default-k8s-diff-port-129908" exists ...
	I1228 07:04:02.864484  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:04:02.872650  882252 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1228 07:04:02.877275  882252 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 07:03:58.217522  880223 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1228 07:03:58.222483  880223 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1228 07:03:58.223414  880223 api_server.go:141] control plane version: v1.35.0
	I1228 07:03:58.223442  880223 api_server.go:131] duration metric: took 506.434422ms to wait for apiserver health ...
	I1228 07:03:58.223451  880223 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 07:03:58.227321  880223 system_pods.go:59] 9 kube-system pods found
	I1228 07:03:58.227348  880223 system_pods.go:61] "coredns-7d764666f9-s8grm" [2e6d093f-2877-4940-9c33-86d69096da0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:03:58.227355  880223 system_pods.go:61] "etcd-embed-certs-982151" [c669a8a2-61c2-49c4-9fdc-53dc118bdfc4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:03:58.227362  880223 system_pods.go:61] "kindnet-fchxm" [831cdd63-4a6a-4639-823d-4474a33c5a36] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:03:58.227377  880223 system_pods.go:61] "kube-apiserver-embed-certs-982151" [a909bb5d-6c6d-4bc6-af01-10c1dd644033] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:03:58.227387  880223 system_pods.go:61] "kube-controller-manager-embed-certs-982151" [d6b9118f-9c7a-4f5c-ac1c-5dff86296027] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:03:58.227393  880223 system_pods.go:61] "kube-proxy-z29fh" [18a1ba5f-99d6-468e-aa14-5e347d2894a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:03:58.227400  880223 system_pods.go:61] "kube-scheduler-embed-certs-982151" [5a53225f-f1b0-497e-8376-d7c0c3336b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:03:58.227407  880223 system_pods.go:61] "metrics-server-5d785b57d4-xsks7" [81e3f749-eed9-432c-89e9-f4548e1b7e3f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:03:58.227416  880223 system_pods.go:61] "storage-provisioner" [098bf558-141e-4b8d-a1e3-aaae9afedb15] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:03:58.227424  880223 system_pods.go:74] duration metric: took 3.965842ms to wait for pod list to return data ...
	I1228 07:03:58.227433  880223 default_sa.go:34] waiting for default service account to be created ...
	I1228 07:03:58.229720  880223 default_sa.go:45] found service account: "default"
	I1228 07:03:58.229740  880223 default_sa.go:55] duration metric: took 2.300807ms for default service account to be created ...
	I1228 07:03:58.229747  880223 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 07:03:58.299736  880223 system_pods.go:86] 9 kube-system pods found
	I1228 07:03:58.299772  880223 system_pods.go:89] "coredns-7d764666f9-s8grm" [2e6d093f-2877-4940-9c33-86d69096da0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:03:58.299780  880223 system_pods.go:89] "etcd-embed-certs-982151" [c669a8a2-61c2-49c4-9fdc-53dc118bdfc4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:03:58.299787  880223 system_pods.go:89] "kindnet-fchxm" [831cdd63-4a6a-4639-823d-4474a33c5a36] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:03:58.299793  880223 system_pods.go:89] "kube-apiserver-embed-certs-982151" [a909bb5d-6c6d-4bc6-af01-10c1dd644033] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:03:58.299798  880223 system_pods.go:89] "kube-controller-manager-embed-certs-982151" [d6b9118f-9c7a-4f5c-ac1c-5dff86296027] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:03:58.299804  880223 system_pods.go:89] "kube-proxy-z29fh" [18a1ba5f-99d6-468e-aa14-5e347d2894a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:03:58.299809  880223 system_pods.go:89] "kube-scheduler-embed-certs-982151" [5a53225f-f1b0-497e-8376-d7c0c3336b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:03:58.299816  880223 system_pods.go:89] "metrics-server-5d785b57d4-xsks7" [81e3f749-eed9-432c-89e9-f4548e1b7e3f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:03:58.299823  880223 system_pods.go:89] "storage-provisioner" [098bf558-141e-4b8d-a1e3-aaae9afedb15] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:03:58.299833  880223 system_pods.go:126] duration metric: took 70.080198ms to wait for k8s-apps to be running ...
	I1228 07:03:58.299847  880223 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 07:03:58.299903  880223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:03:58.316616  880223 system_svc.go:56] duration metric: took 16.755637ms WaitForService to wait for kubelet
	I1228 07:03:58.316644  880223 kubeadm.go:587] duration metric: took 3.40319134s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:03:58.316662  880223 node_conditions.go:102] verifying NodePressure condition ...
	I1228 07:03:58.319428  880223 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 07:03:58.319454  880223 node_conditions.go:123] node cpu capacity is 8
	I1228 07:03:58.319469  880223 node_conditions.go:105] duration metric: took 2.802451ms to run NodePressure ...
	I1228 07:03:58.319480  880223 start.go:242] waiting for startup goroutines ...
	I1228 07:03:58.319487  880223 start.go:247] waiting for cluster config update ...
	I1228 07:03:58.319498  880223 start.go:256] writing updated cluster config ...
	I1228 07:03:58.319774  880223 ssh_runner.go:195] Run: rm -f paused
	I1228 07:03:58.324556  880223 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:03:58.327768  880223 pod_ready.go:83] waiting for pod "coredns-7d764666f9-s8grm" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 07:04:00.333468  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:02.334146  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:03:58.931292  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:00.931470  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:02.938234  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	I1228 07:04:02.878697  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 07:04:02.878726  882252 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 07:04:02.878799  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:02.894522  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:02.906490  882252 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 07:04:02.906522  882252 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 07:04:02.906593  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:02.906643  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:02.913956  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:02.933719  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:03.007313  882252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:04:03.020758  882252 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1228 07:04:03.020783  882252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1228 07:04:03.025468  882252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:04:03.025821  882252 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-129908" to be "Ready" ...
	I1228 07:04:03.029952  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 07:04:03.029974  882252 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 07:04:03.039959  882252 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1228 07:04:03.039983  882252 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1228 07:04:03.048956  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 07:04:03.048979  882252 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 07:04:03.049792  882252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:04:03.059613  882252 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:04:03.059634  882252 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1228 07:04:03.069470  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 07:04:03.069493  882252 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 07:04:03.079512  882252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:04:03.090099  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 07:04:03.090125  882252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 07:04:03.109191  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 07:04:03.109228  882252 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 07:04:03.127327  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 07:04:03.127354  882252 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 07:04:03.145332  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 07:04:03.145362  882252 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 07:04:03.161020  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 07:04:03.161041  882252 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 07:04:03.175283  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:04:03.175302  882252 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 07:04:03.190565  882252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:04:04.366811  882252 node_ready.go:49] node "default-k8s-diff-port-129908" is "Ready"
	I1228 07:04:04.366854  882252 node_ready.go:38] duration metric: took 1.340986184s for node "default-k8s-diff-port-129908" to be "Ready" ...
	I1228 07:04:04.366876  882252 api_server.go:52] waiting for apiserver process to appear ...
	I1228 07:04:04.366953  882252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 07:04:05.079411  882252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.05389931s)
	I1228 07:04:05.079504  882252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.029690998s)
	I1228 07:04:05.079781  882252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.000232104s)
	I1228 07:04:05.079814  882252 addons.go:495] Verifying addon metrics-server=true in "default-k8s-diff-port-129908"
	I1228 07:04:05.079947  882252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.889324162s)
	I1228 07:04:05.080255  882252 api_server.go:72] duration metric: took 2.265784615s to wait for apiserver process to appear ...
	I1228 07:04:05.080277  882252 api_server.go:88] waiting for apiserver healthz status ...
	I1228 07:04:05.080339  882252 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1228 07:04:05.082924  882252 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-129908 addons enable metrics-server
	
	I1228 07:04:05.086253  882252 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 07:04:05.086281  882252 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
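	Note: the two [-] hooks above (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are the usual transient failures right after an apiserver restart; they flip to ok once the bootstrap RBAC roles and system priority classes have been written, which is why the retry below returns 200 half a second later. The same per-check breakdown can be fetched by hand (context name assumed from this profile):
	
		kubectl --context default-k8s-diff-port-129908 get --raw '/healthz?verbose'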
	I1228 07:04:05.088197  882252 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I1228 07:04:05.089519  882252 addons.go:530] duration metric: took 2.27478482s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I1228 07:04:05.581379  882252 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1228 07:04:05.586973  882252 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1228 07:04:05.588678  882252 api_server.go:141] control plane version: v1.35.0
	I1228 07:04:05.588713  882252 api_server.go:131] duration metric: took 508.427311ms to wait for apiserver health ...
	I1228 07:04:05.588726  882252 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 07:04:05.592639  882252 system_pods.go:59] 9 kube-system pods found
	I1228 07:04:05.592689  882252 system_pods.go:61] "coredns-7d764666f9-mbfzh" [30aa2ad5-7a16-4fce-8347-17630085bc40] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:04:05.592702  882252 system_pods.go:61] "etcd-default-k8s-diff-port-129908" [7c64ebeb-cc67-4b17-a9c2-765ad4d97798] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:04:05.592722  882252 system_pods.go:61] "kindnet-x5db5" [fb316251-22e1-43e0-b728-1f470d02cacb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:04:05.592730  882252 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-129908" [05583a6f-194c-42d1-a532-f3a5765c51db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:04:05.592740  882252 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-129908" [da0ccaf2-767f-4e33-8a8c-d79d87864368] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:04:05.592750  882252 system_pods.go:61] "kube-proxy-rzg9h" [08ba573c-03a0-4a1c-acf1-f34fce8de5d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:04:05.592758  882252 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-129908" [869d96de-6369-43cd-89b8-96bc40f07bff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:04:05.592765  882252 system_pods.go:61] "metrics-server-5d785b57d4-6h7tv" [3f1c607c-1aeb-4fc7-9865-8b161c339288] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:04:05.592772  882252 system_pods.go:61] "storage-provisioner" [f41b8f35-c7cc-4b8e-ae42-81904d11bcd0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:04:05.592784  882252 system_pods.go:74] duration metric: took 4.051269ms to wait for pod list to return data ...
	I1228 07:04:05.592793  882252 default_sa.go:34] waiting for default service account to be created ...
	I1228 07:04:05.595925  882252 default_sa.go:45] found service account: "default"
	I1228 07:04:05.595948  882252 default_sa.go:55] duration metric: took 3.147858ms for default service account to be created ...
	I1228 07:04:05.595959  882252 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 07:04:05.601261  882252 system_pods.go:86] 9 kube-system pods found
	I1228 07:04:05.601357  882252 system_pods.go:89] "coredns-7d764666f9-mbfzh" [30aa2ad5-7a16-4fce-8347-17630085bc40] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:04:05.601384  882252 system_pods.go:89] "etcd-default-k8s-diff-port-129908" [7c64ebeb-cc67-4b17-a9c2-765ad4d97798] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:04:05.601428  882252 system_pods.go:89] "kindnet-x5db5" [fb316251-22e1-43e0-b728-1f470d02cacb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:04:05.601469  882252 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-129908" [05583a6f-194c-42d1-a532-f3a5765c51db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:04:05.601511  882252 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-129908" [da0ccaf2-767f-4e33-8a8c-d79d87864368] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:04:05.601549  882252 system_pods.go:89] "kube-proxy-rzg9h" [08ba573c-03a0-4a1c-acf1-f34fce8de5d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:04:05.601594  882252 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-129908" [869d96de-6369-43cd-89b8-96bc40f07bff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:04:05.601633  882252 system_pods.go:89] "metrics-server-5d785b57d4-6h7tv" [3f1c607c-1aeb-4fc7-9865-8b161c339288] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:04:05.601672  882252 system_pods.go:89] "storage-provisioner" [f41b8f35-c7cc-4b8e-ae42-81904d11bcd0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:04:05.601685  882252 system_pods.go:126] duration metric: took 5.718923ms to wait for k8s-apps to be running ...
	I1228 07:04:05.601696  882252 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 07:04:05.601792  882252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:04:05.633925  882252 system_svc.go:56] duration metric: took 32.2184ms WaitForService to wait for kubelet
	I1228 07:04:05.633962  882252 kubeadm.go:587] duration metric: took 2.819493554s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:04:05.633987  882252 node_conditions.go:102] verifying NodePressure condition ...
	I1228 07:04:05.639517  882252 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 07:04:05.639550  882252 node_conditions.go:123] node cpu capacity is 8
	I1228 07:04:05.639569  882252 node_conditions.go:105] duration metric: took 5.575875ms to run NodePressure ...
	I1228 07:04:05.639586  882252 start.go:242] waiting for startup goroutines ...
	I1228 07:04:05.639597  882252 start.go:247] waiting for cluster config update ...
	I1228 07:04:05.639614  882252 start.go:256] writing updated cluster config ...
	I1228 07:04:05.639915  882252 ssh_runner.go:195] Run: rm -f paused
	I1228 07:04:05.647014  882252 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:04:05.659239  882252 pod_ready.go:83] waiting for pod "coredns-7d764666f9-mbfzh" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 07:04:02.704180  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:04.704962  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:07.203308  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:04.335906  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:06.878878  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:05.434776  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:07.931491  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:07.670978  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:10.165524  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:09.204178  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:11.703509  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:09.333196  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:11.333675  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:10.430946  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:12.431254  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:12.166364  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:14.665182  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:14.203171  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:16.203563  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:13.334174  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:15.833412  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:17.833543  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:14.931067  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:17.431207  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:16.665304  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:19.164406  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:18.204086  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	I1228 07:04:20.703420  873440 pod_ready.go:94] pod "coredns-7d764666f9-9n78x" is "Ready"
	I1228 07:04:20.703450  873440 pod_ready.go:86] duration metric: took 38.005418075s for pod "coredns-7d764666f9-9n78x" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.705734  873440 pod_ready.go:83] waiting for pod "etcd-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.709107  873440 pod_ready.go:94] pod "etcd-no-preload-456925" is "Ready"
	I1228 07:04:20.709130  873440 pod_ready.go:86] duration metric: took 3.373198ms for pod "etcd-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.711055  873440 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.714256  873440 pod_ready.go:94] pod "kube-apiserver-no-preload-456925" is "Ready"
	I1228 07:04:20.714278  873440 pod_ready.go:86] duration metric: took 3.20057ms for pod "kube-apiserver-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.715898  873440 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.901759  873440 pod_ready.go:94] pod "kube-controller-manager-no-preload-456925" is "Ready"
	I1228 07:04:20.901785  873440 pod_ready.go:86] duration metric: took 185.864424ms for pod "kube-controller-manager-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.101880  873440 pod_ready.go:83] waiting for pod "kube-proxy-mn4cz" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.501912  873440 pod_ready.go:94] pod "kube-proxy-mn4cz" is "Ready"
	I1228 07:04:21.501939  873440 pod_ready.go:86] duration metric: took 400.033432ms for pod "kube-proxy-mn4cz" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.701807  873440 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.102084  873440 pod_ready.go:94] pod "kube-scheduler-no-preload-456925" is "Ready"
	I1228 07:04:22.102117  873440 pod_ready.go:86] duration metric: took 400.282919ms for pod "kube-scheduler-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.102133  873440 pod_ready.go:40] duration metric: took 39.409345661s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:04:22.149656  873440 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 07:04:22.151334  873440 out.go:179] * Done! kubectl is now configured to use "no-preload-456925" cluster and "default" namespace by default
	W1228 07:04:20.333502  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:22.336156  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:19.930823  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	I1228 07:04:21.932881  874073 pod_ready.go:94] pod "coredns-5dd5756b68-kcdsc" is "Ready"
	I1228 07:04:21.932918  874073 pod_ready.go:86] duration metric: took 37.007863966s for pod "coredns-5dd5756b68-kcdsc" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.935882  874073 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.939908  874073 pod_ready.go:94] pod "etcd-old-k8s-version-805353" is "Ready"
	I1228 07:04:21.939936  874073 pod_ready.go:86] duration metric: took 4.02365ms for pod "etcd-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.942466  874073 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.946100  874073 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-805353" is "Ready"
	I1228 07:04:21.946121  874073 pod_ready.go:86] duration metric: took 3.628428ms for pod "kube-apiserver-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.948541  874073 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.129399  874073 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-805353" is "Ready"
	I1228 07:04:22.129426  874073 pod_ready.go:86] duration metric: took 180.865961ms for pod "kube-controller-manager-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.330689  874073 pod_ready.go:83] waiting for pod "kube-proxy-sd5kh" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.729871  874073 pod_ready.go:94] pod "kube-proxy-sd5kh" is "Ready"
	I1228 07:04:22.729898  874073 pod_ready.go:86] duration metric: took 399.179627ms for pod "kube-proxy-sd5kh" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.929709  874073 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:23.329353  874073 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-805353" is "Ready"
	I1228 07:04:23.329386  874073 pod_ready.go:86] duration metric: took 399.644333ms for pod "kube-scheduler-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:23.329402  874073 pod_ready.go:40] duration metric: took 38.409544453s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:04:23.376490  874073 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1228 07:04:23.378140  874073 out.go:203] 
	W1228 07:04:23.379347  874073 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1228 07:04:23.380351  874073 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1228 07:04:23.381580  874073 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-805353" cluster and "default" namespace by default
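	Note: the warning fires because kubectl v1.35.0 is seven minor versions ahead of the v1.28.0 control plane, far outside kubectl's supported skew of one minor version; the suggested wrapper downloads and runs a kubectl matching the profile's cluster version, e.g.:
	
		minikube -p old-k8s-version-805353 kubectl -- version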
	W1228 07:04:21.665046  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:23.666729  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:24.833321  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:26.834192  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:26.164189  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:28.164450  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:30.164565  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:29.334446  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:31.833411  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:32.664464  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:34.665842  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9123368677433       6e38f40d628db       11 seconds ago       Running             storage-provisioner       2                   4824b8ff43c7a       storage-provisioner                         kube-system
	bf5cb8e7988b9       07655ddf2eebe       47 seconds ago       Running             kubernetes-dashboard      0                   14302898507da       kubernetes-dashboard-b84665fb8-cvlw8        kubernetes-dashboard
	cde56ac9faaca       4921d7a6dffa9       55 seconds ago       Running             kindnet-cni               1                   d350d5de1305c       kindnet-dk8ws                               kube-system
	5e97c22029350       aa5e3ebc0dfed       55 seconds ago       Running             coredns                   1                   b4876a5895583       coredns-7d764666f9-9n78x                    kube-system
	ef612cadc0d38       56cc512116c8f       55 seconds ago       Running             busybox                   1                   7af2f6aeee5ce       busybox                                     default
	f1296a169eb21       6e38f40d628db       55 seconds ago       Exited              storage-provisioner       1                   4824b8ff43c7a       storage-provisioner                         kube-system
	69edbf00b076f       32652ff1bbe6b       55 seconds ago       Running             kube-proxy                1                   fa8b9412fa006       kube-proxy-mn4cz                            kube-system
	a561cde6d111b       2c9a4b058bd7e       58 seconds ago       Running             kube-controller-manager   1                   cf933a976189c       kube-controller-manager-no-preload-456925   kube-system
	7e0535e2846f3       5c6acd67e9cd1       58 seconds ago       Running             kube-apiserver            1                   11461e2a9fde5       kube-apiserver-no-preload-456925            kube-system
	aa74c485600b4       0a108f7189562       58 seconds ago       Running             etcd                      1                   b94fbfb614653       etcd-no-preload-456925                      kube-system
	2af04813f8003       550794e3b12ac       58 seconds ago       Running             kube-scheduler            1                   aa5cbdaabc12e       kube-scheduler-no-preload-456925            kube-system
	3d7012d3f21db       56cc512116c8f       About a minute ago   Exited              busybox                   0                   0d3b2e5c8de35       busybox                                     default
	23aa2b21e24e3       aa5e3ebc0dfed       About a minute ago   Exited              coredns                   0                   44580b6d1d020       coredns-7d764666f9-9n78x                    kube-system
	043ebb6277a22       4921d7a6dffa9       About a minute ago   Exited              kindnet-cni               0                   a711c96bcac0f       kindnet-dk8ws                               kube-system
	de160001858cd       32652ff1bbe6b       About a minute ago   Exited              kube-proxy                0                   2b15f9b070221       kube-proxy-mn4cz                            kube-system
	279d491264063       5c6acd67e9cd1       About a minute ago   Exited              kube-apiserver            0                   59fd23dcbf133       kube-apiserver-no-preload-456925            kube-system
	f28528104a734       0a108f7189562       About a minute ago   Exited              etcd                      0                   45cfdf2702257       etcd-no-preload-456925                      kube-system
	0a688a759adeb       550794e3b12ac       About a minute ago   Exited              kube-scheduler            0                   ab78a57fc4700       kube-scheduler-no-preload-456925            kube-system
	ea79d3f01ce4a       2c9a4b058bd7e       About a minute ago   Exited              kube-controller-manager   0                   eafabcdac4bc1       kube-controller-manager-no-preload-456925   kube-system
	
	
	==> containerd <==
	Dec 28 07:04:25 no-preload-456925 containerd[448]: time="2025-12-28T07:04:25.478251950Z" level=info msg="StartContainer for \"9123368677433e9f5ddefa4e3479f028568442ba461d66a62dffcc6d6ab209df\" returns successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.598077348Z" level=info msg="StopPodSandbox for \"c3991febbd61bbd33aa210a74932ffa5b28ec0a873c81234b4a465b42a9cf5a0\""
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.599319081Z" level=info msg="TearDown network for sandbox \"c3991febbd61bbd33aa210a74932ffa5b28ec0a873c81234b4a465b42a9cf5a0\" successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.599377926Z" level=info msg="StopPodSandbox for \"c3991febbd61bbd33aa210a74932ffa5b28ec0a873c81234b4a465b42a9cf5a0\" returns successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.600333949Z" level=info msg="RemovePodSandbox for \"c3991febbd61bbd33aa210a74932ffa5b28ec0a873c81234b4a465b42a9cf5a0\""
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.600372486Z" level=info msg="Forcibly stopping sandbox \"c3991febbd61bbd33aa210a74932ffa5b28ec0a873c81234b4a465b42a9cf5a0\""
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.601043532Z" level=info msg="TearDown network for sandbox \"c3991febbd61bbd33aa210a74932ffa5b28ec0a873c81234b4a465b42a9cf5a0\" successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.604662206Z" level=info msg="Ensure that sandbox c3991febbd61bbd33aa210a74932ffa5b28ec0a873c81234b4a465b42a9cf5a0 in task-service has been cleanup successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.608547506Z" level=info msg="RemovePodSandbox \"c3991febbd61bbd33aa210a74932ffa5b28ec0a873c81234b4a465b42a9cf5a0\" returns successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.609270370Z" level=info msg="StopPodSandbox for \"9aa6e4039033056c6a5a6fd0bfd0cfbae6c1e3d3164c0d117ec63a00266f5730\""
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.635701567Z" level=info msg="TearDown network for sandbox \"9aa6e4039033056c6a5a6fd0bfd0cfbae6c1e3d3164c0d117ec63a00266f5730\" successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.635766259Z" level=info msg="StopPodSandbox for \"9aa6e4039033056c6a5a6fd0bfd0cfbae6c1e3d3164c0d117ec63a00266f5730\" returns successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.637832493Z" level=info msg="RemovePodSandbox for \"9aa6e4039033056c6a5a6fd0bfd0cfbae6c1e3d3164c0d117ec63a00266f5730\""
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.638059576Z" level=info msg="Forcibly stopping sandbox \"9aa6e4039033056c6a5a6fd0bfd0cfbae6c1e3d3164c0d117ec63a00266f5730\""
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.665442685Z" level=info msg="TearDown network for sandbox \"9aa6e4039033056c6a5a6fd0bfd0cfbae6c1e3d3164c0d117ec63a00266f5730\" successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.668162592Z" level=info msg="Ensure that sandbox 9aa6e4039033056c6a5a6fd0bfd0cfbae6c1e3d3164c0d117ec63a00266f5730 in task-service has been cleanup successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.671860449Z" level=info msg="RemovePodSandbox \"9aa6e4039033056c6a5a6fd0bfd0cfbae6c1e3d3164c0d117ec63a00266f5730\" returns successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.855275383Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Dec 28 07:04:36 no-preload-456925 containerd[448]: time="2025-12-28T07:04:36.897300384Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 28 07:04:36 no-preload-456925 containerd[448]: time="2025-12-28T07:04:36.946910193Z" level=info msg="fetch failed" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" host=fake.domain
	Dec 28 07:04:36 no-preload-456925 containerd[448]: time="2025-12-28T07:04:36.948236258Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Dec 28 07:04:36 no-preload-456925 containerd[448]: time="2025-12-28T07:04:36.948258213Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:04:36 no-preload-456925 containerd[448]: time="2025-12-28T07:04:36.949376636Z" level=info msg="PullImage \"registry.k8s.io/echoserver:1.4\""
	Dec 28 07:04:36 no-preload-456925 containerd[448]: time="2025-12-28T07:04:36.998838361Z" level=error msg="PullImage \"registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\""
	Dec 28 07:04:36 no-preload-456925 containerd[448]: time="2025-12-28T07:04:36.998912772Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
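	Note: per the error above, registry.k8s.io/echoserver:1.4 is still published as a Docker schema 1 manifest (manifest.v1+prettyjws), which containerd 2.2.1 rejects outright; the fix the message asks for is republishing as schema 2 or OCI. A sketch of how to confirm the schema from any host, assuming crane and jq are installed:
	
		crane manifest registry.k8s.io/echoserver:1.4 | jq .schemaVersion
		# 1 for schema 1; schema 2 and OCI manifests report 2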
	
	
	==> describe nodes <==
	Name:               no-preload-456925
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-456925
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=no-preload-456925
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T07_02_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 07:02:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-456925
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 07:04:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 07:04:35 +0000   Sun, 28 Dec 2025 07:02:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 07:04:35 +0000   Sun, 28 Dec 2025 07:02:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 07:04:35 +0000   Sun, 28 Dec 2025 07:02:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 07:04:35 +0000   Sun, 28 Dec 2025 07:03:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-456925
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                903abed0-0c6b-40fa-b40c-70f3131d91ee
	  Boot ID:                    b0f6328b-901c-4d58-bf8e-80c711dcb897
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-7d764666f9-9n78x                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-no-preload-456925                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-dk8ws                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-no-preload-456925              250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-no-preload-456925     200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-mn4cz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-no-preload-456925              100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 metrics-server-5d785b57d4-j58xv               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         78s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-84lwq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-cvlw8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  108s  node-controller  Node no-preload-456925 event: Registered Node no-preload-456925 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node no-preload-456925 event: Registered Node no-preload-456925 in Controller
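	Note: the capacity, allocatable, and request totals above are read straight off the node object; the same figures can be pulled with a jsonpath query, e.g.:
	
		kubectl --context no-preload-456925 get node no-preload-456925 -o jsonpath='{.status.allocatable}'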
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 27 65 9e 6b ac 08 06
	[  +0.000464] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 2e 94 6c 92 e4 08 06
	[Dec28 07:01] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff a6 29 4f 66 86 ca 08 06
	[  +0.004985] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 91 2b a7 d5 cf 08 06
	[ +18.742603] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 b5 93 ca e4 71 08 06
	[  +0.109309] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 c6 37 a9 4f 21 08 06
	[  +8.643062] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 52 6e 36 23 da 08 06
	[ +19.438211] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 70 fa ed e0 93 08 06
	[  +0.000530] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa 52 6e 36 23 da 08 06
	[  +0.857565] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 1d c3 e6 9a 7e 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 29 4f 66 86 ca 08 06
	[Dec28 07:02] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 a0 55 33 45 d1 08 06
	[  +0.000392] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 c6 37 a9 4f 21 08 06
	
	
	==> kernel <==
	 07:04:37 up  3:47,  0 user,  load average: 2.34, 2.93, 10.73
	Linux no-preload-456925 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:04:35 no-preload-456925 kubelet[2384]: I1228 07:04:35.833097    2384 kubelet_node_status.go:74] "Attempting to register node" node="no-preload-456925"
	Dec 28 07:04:35 no-preload-456925 kubelet[2384]: I1228 07:04:35.854315    2384 kubelet_node_status.go:123] "Node was previously registered" node="no-preload-456925"
	Dec 28 07:04:35 no-preload-456925 kubelet[2384]: I1228 07:04:35.854426    2384 kubelet_node_status.go:77] "Successfully registered node" node="no-preload-456925"
	Dec 28 07:04:35 no-preload-456925 kubelet[2384]: I1228 07:04:35.854464    2384 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 28 07:04:35 no-preload-456925 kubelet[2384]: I1228 07:04:35.856986    2384 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: I1228 07:04:36.583470    2384 apiserver.go:52] "Watching apiserver"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: I1228 07:04:36.598768    2384 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: I1228 07:04:36.614339    2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfb9008b-3ff4-46c3-9a73-02346d316ffe-xtables-lock\") pod \"kube-proxy-mn4cz\" (UID: \"bfb9008b-3ff4-46c3-9a73-02346d316ffe\") " pod="kube-system/kube-proxy-mn4cz"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: I1228 07:04:36.614408    2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb39c84f-7933-4fe0-b2c7-5856f267f779-xtables-lock\") pod \"kindnet-dk8ws\" (UID: \"eb39c84f-7933-4fe0-b2c7-5856f267f779\") " pod="kube-system/kindnet-dk8ws"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: I1228 07:04:36.614436    2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb39c84f-7933-4fe0-b2c7-5856f267f779-lib-modules\") pod \"kindnet-dk8ws\" (UID: \"eb39c84f-7933-4fe0-b2c7-5856f267f779\") " pod="kube-system/kindnet-dk8ws"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: I1228 07:04:36.614477    2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eb39c84f-7933-4fe0-b2c7-5856f267f779-cni-cfg\") pod \"kindnet-dk8ws\" (UID: \"eb39c84f-7933-4fe0-b2c7-5856f267f779\") " pod="kube-system/kindnet-dk8ws"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: I1228 07:04:36.614643    2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfb9008b-3ff4-46c3-9a73-02346d316ffe-lib-modules\") pod \"kube-proxy-mn4cz\" (UID: \"bfb9008b-3ff4-46c3-9a73-02346d316ffe\") " pod="kube-system/kube-proxy-mn4cz"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: I1228 07:04:36.614724    2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a354a4db-d6c9-48f0-8af3-a9b43170f760-tmp\") pod \"storage-provisioner\" (UID: \"a354a4db-d6c9-48f0-8af3-a9b43170f760\") " pod="kube-system/storage-provisioner"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.705920    2384 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-456925" containerName="kube-controller-manager"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.706011    2384 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-456925" containerName="kube-scheduler"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.706370    2384 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-456925" containerName="kube-apiserver"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.706500    2384 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-456925" containerName="etcd"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.948614    2384 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.948741    2384 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.949126    2384 kuberuntime_manager.go:1664] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-5d785b57d4-j58xv_kube-system(c99710d1-f46f-48c7-a13e-e9e499a79199): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" logger="UnhandledError"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.949197    2384 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-j58xv" podUID="c99710d1-f46f-48c7-a13e-e9e499a79199"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.999176    2384 log.go:32] "PullImage from image service failed" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.999296    2384 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.999600    2384 kuberuntime_manager.go:1664] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-867fb5f87b-84lwq_kubernetes-dashboard(44cc5d60-86b0-4d7c-8b7f-a839e785e3df): ErrImagePull: rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" logger="UnhandledError"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.999673    2384 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"rpc error: code = Unimplemented desc = failed to pull and unpack image \\\"registry.k8s.io/echoserver:1.4\\\": not implemented: media type \\\"application/vnd.docker.distribution.manifest.v1+prettyjws\\\" is no longer supported since containerd v2.1, please rebuild the image as \\\"application/vnd.docker.distribution.manifest.v2+json\\\" or \\\"application/vnd.oci.image.manifest.v1+json\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-84lwq" podUID="44cc5d60-86b0-4d7c-8b7f-a839e785e3df"
	
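	Note: the kubelet is hitting two distinct, expected pull failures: metrics-server points at the unresolvable fake.domain registry (the stand-in these tests configure on purpose), and dashboard-metrics-scraper trips the same schema 1 rejection seen in the containerd log. To check which image a workload is actually configured to pull (deployment name assumed):
	
		kubectl --context no-preload-456925 -n kube-system get deploy metrics-server \
		  -o jsonpath='{.spec.template.spec.containers[0].image}'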

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-456925 -n no-preload-456925
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-456925 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-5d785b57d4-j58xv dashboard-metrics-scraper-867fb5f87b-84lwq
helpers_test.go:283: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context no-preload-456925 describe pod metrics-server-5d785b57d4-j58xv dashboard-metrics-scraper-867fb5f87b-84lwq
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context no-preload-456925 describe pod metrics-server-5d785b57d4-j58xv dashboard-metrics-scraper-867fb5f87b-84lwq: exit status 1 (78.583082ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5d785b57d4-j58xv" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-84lwq" not found

** /stderr **
helpers_test.go:288: kubectl --context no-preload-456925 describe pod metrics-server-5d785b57d4-j58xv dashboard-metrics-scraper-867fb5f87b-84lwq: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-456925
helpers_test.go:244: (dbg) docker inspect no-preload-456925:

-- stdout --
	[
	    {
	        "Id": "8cefd7db6fd31061281db3bdce07f250da333a3e3365a1c9b7fcf389b1631201",
	        "Created": "2025-12-28T07:02:21.77133894Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 873680,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:03:32.562892545Z",
	            "FinishedAt": "2025-12-28T07:03:31.640759351Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/8cefd7db6fd31061281db3bdce07f250da333a3e3365a1c9b7fcf389b1631201/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8cefd7db6fd31061281db3bdce07f250da333a3e3365a1c9b7fcf389b1631201/hostname",
	        "HostsPath": "/var/lib/docker/containers/8cefd7db6fd31061281db3bdce07f250da333a3e3365a1c9b7fcf389b1631201/hosts",
	        "LogPath": "/var/lib/docker/containers/8cefd7db6fd31061281db3bdce07f250da333a3e3365a1c9b7fcf389b1631201/8cefd7db6fd31061281db3bdce07f250da333a3e3365a1c9b7fcf389b1631201-json.log",
	        "Name": "/no-preload-456925",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-456925:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-456925",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8cefd7db6fd31061281db3bdce07f250da333a3e3365a1c9b7fcf389b1631201",
	                "LowerDir": "/var/lib/docker/overlay2/9e5bbb928c3b324466ec7a49a68fe75c2a027b3122cecb6a3ff12ffcc67164b2-init/diff:/var/lib/docker/overlay2/dfc7a4c580b7be84b0f83410b7478c6f6aa3f00b996556623ab9129bd6527422/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9e5bbb928c3b324466ec7a49a68fe75c2a027b3122cecb6a3ff12ffcc67164b2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9e5bbb928c3b324466ec7a49a68fe75c2a027b3122cecb6a3ff12ffcc67164b2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9e5bbb928c3b324466ec7a49a68fe75c2a027b3122cecb6a3ff12ffcc67164b2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-456925",
	                "Source": "/var/lib/docker/volumes/no-preload-456925/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-456925",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-456925",
	                "name.minikube.sigs.k8s.io": "no-preload-456925",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "268df8e5abee5b3cd47704904ff04b12dc84b734e029a5472bec4361f905066c",
	            "SandboxKey": "/var/run/docker/netns/268df8e5abee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-456925": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "01bcffc32464b95dc919335f97af3e70ebdcd82b5f44169bb87577b18116c422",
	                    "EndpointID": "99f009e50c09cb15e7f38cbc1c01a61e1bd5cc560d32180202d4fd772c7f227a",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "be:66:f5:70:a0:98",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-456925",
	                        "8cefd7db6fd3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
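Editor's note: the inspect output above is the raw material the post-mortem helpers read; the `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` invocations later in the log extract exactly the 127.0.0.1:33117 SSH mapping listed under NetworkSettings.Ports. A minimal Go sketch of the same lookup, assuming the `docker inspect no-preload-456925` JSON is piped on stdin:

```go
// port_from_inspect.go: minimal sketch mirroring the Go template the
// helpers pass to `docker container inspect -f` to find the host port
// Docker mapped to the container's SSH port.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Only the fields we need; `docker inspect` emits a JSON array.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	var containers []container
	if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil {
		panic(err)
	}
	for _, c := range containers {
		for _, b := range c.NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("ssh reachable at %s:%s\n", b.HostIp, b.HostPort)
		}
	}
}
```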
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-456925 -n no-preload-456925
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-456925 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p stopped-upgrade-153407                                                                                                                                                                                                                           │ stopped-upgrade-153407       │ jenkins │ v1.37.0 │ 28 Dec 25 07:02 UTC │ 28 Dec 25 07:02 UTC │
	│ delete  │ -p disable-driver-mounts-284795                                                                                                                                                                                                                     │ disable-driver-mounts-284795 │ jenkins │ v1.37.0 │ 28 Dec 25 07:02 UTC │ 28 Dec 25 07:02 UTC │
	│ start   │ -p default-k8s-diff-port-129908 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-129908 │ jenkins │ v1.37.0 │ 28 Dec 25 07:02 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable metrics-server -p no-preload-456925 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ stop    │ -p no-preload-456925 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-805353 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ stop    │ -p old-k8s-version-805353 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable dashboard -p no-preload-456925 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p no-preload-456925 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                       │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-805353 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p old-k8s-version-805353 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-982151 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-982151           │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ stop    │ -p embed-certs-982151 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-982151           │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-129908 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-129908 │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ stop    │ -p default-k8s-diff-port-129908 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-129908 │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-982151 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-982151           │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p embed-certs-982151 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                        │ embed-certs-982151           │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-129908 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-129908 │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p default-k8s-diff-port-129908 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-129908 │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │                     │
	│ image   │ no-preload-456925 image list --format=json                                                                                                                                                                                                          │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p no-preload-456925 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ image   │ old-k8s-version-805353 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ unpause │ -p no-preload-456925 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p old-k8s-version-805353 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ unpause │ -p old-k8s-version-805353 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
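Editor's note: the audit trail above reconstructs the serial flow that preceded the failure: enable metrics-server and dashboard on no-preload-456925 (with registry.k8s.io/echoserver:1.4 substituted for the real images), stop, restart with --preload=false, then pause and unpause. For ad-hoc analysis of such a dump, a minimal Go sketch that slices the box-drawing table into records, assuming the table text arrives on stdin:

```go
// audit_rows.go: minimal sketch, splits the audit table's data rows
// on the │ separator and drops the border lines (┌, ├, └ prefixes).
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // rows can be very long
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "│") {
			continue // border or header separator, not a data row
		}
		var cells []string
		for _, c := range strings.Split(strings.Trim(line, "│"), "│") {
			cells = append(cells, strings.TrimSpace(c))
		}
		// cells: COMMAND, ARGS, PROFILE, USER, VERSION, START TIME, END TIME
		fmt.Println(strings.Join(cells, " | "))
	}
}
```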
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:03:55
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:03:55.738532  882252 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:03:55.738689  882252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:03:55.738703  882252 out.go:374] Setting ErrFile to fd 2...
	I1228 07:03:55.738721  882252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:03:55.739259  882252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 07:03:55.740037  882252 out.go:368] Setting JSON to false
	I1228 07:03:55.742377  882252 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13580,"bootTime":1766891856,"procs":388,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 07:03:55.742479  882252 start.go:143] virtualization: kvm guest
	I1228 07:03:55.744573  882252 out.go:179] * [default-k8s-diff-port-129908] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 07:03:55.745969  882252 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:03:55.746036  882252 notify.go:221] Checking for updates...
	I1228 07:03:55.749018  882252 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:03:55.750368  882252 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:03:55.751423  882252 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	I1228 07:03:55.752505  882252 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 07:03:55.753752  882252 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:03:55.755380  882252 config.go:182] Loaded profile config "default-k8s-diff-port-129908": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:03:55.756046  882252 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:03:55.782846  882252 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 07:03:55.782996  882252 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:03:55.852117  882252 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-28 07:03:55.840745048 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 07:03:55.852300  882252 docker.go:319] overlay module found
	I1228 07:03:55.853848  882252 out.go:179] * Using the docker driver based on existing profile
	I1228 07:03:55.855042  882252 start.go:309] selected driver: docker
	I1228 07:03:55.855063  882252 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-129908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-129908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:03:55.855165  882252 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:03:55.855840  882252 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:03:55.917473  882252 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-28 07:03:55.906550203 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 07:03:55.917793  882252 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:03:55.917933  882252 cni.go:84] Creating CNI manager for ""
	I1228 07:03:55.918027  882252 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:03:55.918113  882252 start.go:353] cluster config:
	{Name:default-k8s-diff-port-129908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-129908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:03:55.919833  882252 out.go:179] * Starting "default-k8s-diff-port-129908" primary control-plane node in "default-k8s-diff-port-129908" cluster
	I1228 07:03:55.920969  882252 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:03:55.922122  882252 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:03:55.923232  882252 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:03:55.923274  882252 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-552174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-amd64.tar.lz4
	I1228 07:03:55.923284  882252 cache.go:65] Caching tarball of preloaded images
	I1228 07:03:55.923341  882252 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:03:55.923383  882252 preload.go:251] Found /home/jenkins/minikube-integration/22352-552174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1228 07:03:55.923396  882252 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1228 07:03:55.923509  882252 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/config.json ...
	I1228 07:03:55.945420  882252 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:03:55.945450  882252 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:03:55.945480  882252 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:03:55.945524  882252 start.go:360] acquireMachinesLock for default-k8s-diff-port-129908: {Name:mk66a28d31a5a7f03f0abd1dfec44af622c036e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:03:55.945595  882252 start.go:364] duration metric: took 45.236µs to acquireMachinesLock for "default-k8s-diff-port-129908"
	I1228 07:03:55.945619  882252 start.go:96] Skipping create...Using existing machine configuration
	I1228 07:03:55.945629  882252 fix.go:54] fixHost starting: 
	I1228 07:03:55.945869  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:03:55.966941  882252 fix.go:112] recreateIfNeeded on default-k8s-diff-port-129908: state=Stopped err=<nil>
	W1228 07:03:55.966987  882252 fix.go:138] unexpected machine state, will restart: <nil>
	W1228 07:03:53.203811  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:03:55.206598  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	I1228 07:03:54.958080  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 07:03:54.958206  880223 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 07:03:54.958289  880223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-982151
	I1228 07:03:54.961176  880223 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:03:54.961312  880223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 07:03:54.961373  880223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-982151
	I1228 07:03:54.988639  880223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/embed-certs-982151/id_rsa Username:docker}
	I1228 07:03:54.999061  880223 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 07:03:54.999103  880223 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 07:03:54.999305  880223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-982151
	I1228 07:03:55.003431  880223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/embed-certs-982151/id_rsa Username:docker}
	I1228 07:03:55.007935  880223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/embed-certs-982151/id_rsa Username:docker}
	I1228 07:03:55.029653  880223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/embed-certs-982151/id_rsa Username:docker}
	I1228 07:03:55.081852  880223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:03:55.098405  880223 node_ready.go:35] waiting up to 6m0s for node "embed-certs-982151" to be "Ready" ...
	I1228 07:03:55.112422  880223 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1228 07:03:55.112450  880223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1228 07:03:55.121850  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:03:55.121890  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 07:03:55.121900  880223 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 07:03:55.136569  880223 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1228 07:03:55.136606  880223 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1228 07:03:55.143729  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:03:55.147425  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 07:03:55.147451  880223 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 07:03:55.172581  880223 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:03:55.172615  880223 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1228 07:03:55.176850  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 07:03:55.176888  880223 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 07:03:55.210186  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 07:03:55.210579  880223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 07:03:55.215104  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:03:55.244553  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 07:03:55.244590  880223 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1228 07:03:55.261806  880223 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1228 07:03:55.261878  880223 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1228 07:03:55.261947  880223 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1228 07:03:55.294458  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 07:03:55.294486  880223 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 07:03:55.326344  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 07:03:55.326372  880223 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 07:03:55.346621  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 07:03:55.346647  880223 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 07:03:55.375997  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:03:55.376020  880223 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 07:03:55.391494  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:03:55.441974  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:03:55.603856  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:03:57.038487  880223 node_ready.go:49] node "embed-certs-982151" is "Ready"
	I1228 07:03:57.038523  880223 node_ready.go:38] duration metric: took 1.940079394s for node "embed-certs-982151" to be "Ready" ...
	I1228 07:03:57.038543  880223 api_server.go:52] waiting for apiserver process to appear ...
	I1228 07:03:57.038606  880223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 07:03:57.704736  880223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.489588386s)
	I1228 07:03:57.704780  880223 addons.go:495] Verifying addon metrics-server=true in "embed-certs-982151"
	I1228 07:03:57.704836  880223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.313298327s)
	I1228 07:03:57.705109  880223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.263104063s)
	I1228 07:03:57.707172  880223 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-982151 addons enable metrics-server
	
	I1228 07:03:57.716966  880223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.11307566s)
	I1228 07:03:57.716984  880223 api_server.go:72] duration metric: took 2.803527403s to wait for apiserver process to appear ...
	I1228 07:03:57.717001  880223 api_server.go:88] waiting for apiserver healthz status ...
	I1228 07:03:57.717019  880223 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1228 07:03:57.719801  880223 out.go:179] * Enabled addons: metrics-server, dashboard, storage-provisioner, default-storageclass
	I1228 07:03:57.721544  880223 addons.go:530] duration metric: took 2.808027569s for enable addons: enabled=[metrics-server dashboard storage-provisioner default-storageclass]
	I1228 07:03:57.721710  880223 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 07:03:57.721773  880223 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
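Editor's note: the 500 responses above are transient. Two poststarthooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) had not completed yet, so minikube keeps polling until /healthz returns 200. A minimal sketch of that retry loop, assuming /healthz is reachable anonymously; InsecureSkipVerify stands in for minikube's real certificate handling and is only acceptable in a throwaway probe like this:

```go
// healthz_poll.go: minimal sketch of waiting out transient apiserver
// 500s like the ones logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Throwaway probe only; a real client would trust minikube's CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for attempt := 0; attempt < 120; attempt++ {
		resp, err := client.Get("https://192.168.94.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			fmt.Println("healthz:", resp.Status)
			if resp.StatusCode == http.StatusOK {
				return // poststarthooks finished
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("healthz never turned 200")
}
```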
	W1228 07:03:53.930573  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:03:56.432760  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	I1228 07:03:55.969541  882252 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-129908" ...
	I1228 07:03:55.969643  882252 cli_runner.go:164] Run: docker start default-k8s-diff-port-129908
	I1228 07:03:56.285009  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:03:56.317632  882252 kic.go:430] container "default-k8s-diff-port-129908" state is running.
	I1228 07:03:56.318755  882252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-129908
	I1228 07:03:56.344384  882252 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/config.json ...
	I1228 07:03:56.344656  882252 machine.go:94] provisionDockerMachine start ...
	I1228 07:03:56.344759  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:03:56.367745  882252 main.go:144] libmachine: Using SSH client type: native
	I1228 07:03:56.368021  882252 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1228 07:03:56.368034  882252 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:03:56.368796  882252 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41308->127.0.0.1:33133: read: connection reset by peer
	I1228 07:03:59.512247  882252 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-129908
	
	I1228 07:03:59.512276  882252 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-129908"
	I1228 07:03:59.512350  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:03:59.534401  882252 main.go:144] libmachine: Using SSH client type: native
	I1228 07:03:59.534744  882252 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1228 07:03:59.534766  882252 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-129908 && echo "default-k8s-diff-port-129908" | sudo tee /etc/hostname
	I1228 07:03:59.684180  882252 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-129908
	
	I1228 07:03:59.684288  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:03:59.705307  882252 main.go:144] libmachine: Using SSH client type: native
	I1228 07:03:59.705585  882252 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1228 07:03:59.705613  882252 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-129908' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-129908/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-129908' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:03:59.844260  882252 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 07:03:59.844297  882252 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-552174/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-552174/.minikube}
	I1228 07:03:59.844323  882252 ubuntu.go:190] setting up certificates
	I1228 07:03:59.844346  882252 provision.go:84] configureAuth start
	I1228 07:03:59.844416  882252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-129908
	I1228 07:03:59.865173  882252 provision.go:143] copyHostCerts
	I1228 07:03:59.865247  882252 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem, removing ...
	I1228 07:03:59.865261  882252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem
	I1228 07:03:59.865342  882252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem (1078 bytes)
	I1228 07:03:59.865484  882252 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem, removing ...
	I1228 07:03:59.865498  882252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem
	I1228 07:03:59.865539  882252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem (1123 bytes)
	I1228 07:03:59.865612  882252 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem, removing ...
	I1228 07:03:59.865623  882252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem
	I1228 07:03:59.865658  882252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem (1675 bytes)
	I1228 07:03:59.865731  882252 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-129908 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-129908 localhost minikube]
	I1228 07:03:59.897890  882252 provision.go:177] copyRemoteCerts
	I1228 07:03:59.897972  882252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:03:59.898024  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:03:59.918735  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
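The ssh client parameters logged above (127.0.0.1, forwarded port 33133, user docker, the per-machine id_rsa) are enough to open the same session by hand; a sketch, assuming the container is still up:

	ssh -o StrictHostKeyChecking=no -p 33133 \
	  -i /home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa \
	  docker@127.0.0.1 hostname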
	I1228 07:04:00.019603  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1228 07:04:00.042819  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1228 07:04:00.064302  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 07:04:00.085628  882252 provision.go:87] duration metric: took 241.249279ms to configureAuth
	I1228 07:04:00.085661  882252 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:04:00.085909  882252 config.go:182] Loaded profile config "default-k8s-diff-port-129908": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:04:00.085931  882252 machine.go:97] duration metric: took 3.741255863s to provisionDockerMachine
	I1228 07:04:00.085943  882252 start.go:293] postStartSetup for "default-k8s-diff-port-129908" (driver="docker")
	I1228 07:04:00.085955  882252 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:04:00.086021  882252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:04:00.086092  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:00.107294  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:00.213532  882252 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:04:00.217985  882252 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:04:00.218016  882252 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:04:00.218030  882252 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-552174/.minikube/addons for local assets ...
	I1228 07:04:00.218175  882252 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-552174/.minikube/files for local assets ...
	I1228 07:04:00.218332  882252 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem -> 5558782.pem in /etc/ssl/certs
	I1228 07:04:00.218449  882252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:04:00.227795  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem --> /etc/ssl/certs/5558782.pem (1708 bytes)
	I1228 07:04:00.251291  882252 start.go:296] duration metric: took 165.331058ms for postStartSetup
	I1228 07:04:00.251386  882252 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:04:00.251464  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:00.276621  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:00.373799  882252 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:04:00.379539  882252 fix.go:56] duration metric: took 4.433903055s for fixHost
	I1228 07:04:00.379578  882252 start.go:83] releasing machines lock for "default-k8s-diff-port-129908", held for 4.433956892s
	I1228 07:04:00.379650  882252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-129908
	I1228 07:04:00.401020  882252 ssh_runner.go:195] Run: cat /version.json
	I1228 07:04:00.401076  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:00.401098  882252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 07:04:00.401197  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:00.423791  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:00.424146  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:00.518927  882252 ssh_runner.go:195] Run: systemctl --version
	I1228 07:04:00.588110  882252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:04:00.594268  882252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:04:00.594347  882252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:04:00.604690  882252 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
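The runner passes that find command as separate argv entries, so no shell quoting appears in the log; pasted into a shell it needs the parentheses and globs escaped. A runnable sketch of the same disable step:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;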
	I1228 07:04:00.604711  882252 start.go:496] detecting cgroup driver to use...
	I1228 07:04:00.604747  882252 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 07:04:00.604794  882252 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1228 07:04:00.626916  882252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:04:00.642927  882252 docker.go:218] disabling cri-docker service (if available) ...
	I1228 07:04:00.643006  882252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 07:04:00.661151  882252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 07:04:00.677071  882252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 07:04:00.782881  882252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 07:04:00.886938  882252 docker.go:234] disabling docker service ...
	I1228 07:04:00.887034  882252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 07:04:00.905648  882252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 07:04:00.922255  882252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 07:04:01.032767  882252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 07:04:01.151567  882252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 07:04:01.167546  882252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:04:01.184683  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1228 07:04:01.195723  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:04:01.206605  882252 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1228 07:04:01.206685  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1228 07:04:01.217347  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:04:01.227406  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:04:01.238026  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:04:01.252602  882252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:04:01.262955  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:04:01.274767  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:04:01.285746  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1228 07:04:01.296288  882252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:04:01.305400  882252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:04:01.314805  882252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:04:01.409989  882252 ssh_runner.go:195] Run: sudo systemctl restart containerd
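The sed calls above rewrite /etc/containerd/config.toml in place (systemd cgroups, pause image, CNI conf dir, unprivileged ports) before this restart. A quick hand check that the rewrites landed and containerd answers on its socket, as a sketch:

	grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml
	sudo /usr/local/bin/crictl version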
	I1228 07:04:01.571109  882252 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1228 07:04:01.571191  882252 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1228 07:04:01.575916  882252 start.go:574] Will wait 60s for crictl version
	I1228 07:04:01.575986  882252 ssh_runner.go:195] Run: which crictl
	I1228 07:04:01.580123  882252 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:04:01.609855  882252 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1228 07:04:01.609941  882252 ssh_runner.go:195] Run: containerd --version
	I1228 07:04:01.636563  882252 ssh_runner.go:195] Run: containerd --version
	I1228 07:04:01.664300  882252 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	W1228 07:03:57.704861  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:00.204893  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	I1228 07:04:01.666266  882252 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-129908 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:04:01.685605  882252 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1228 07:04:01.690424  882252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:04:01.702737  882252 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-129908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-129908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:04:01.702926  882252 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:04:01.702997  882252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:04:01.733786  882252 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:04:01.733821  882252 containerd.go:542] Images already preloaded, skipping extraction
	I1228 07:04:01.733892  882252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:04:01.763427  882252 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:04:01.763453  882252 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:04:01.763463  882252 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 containerd true true} ...
	I1228 07:04:01.763630  882252 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-129908 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-129908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 07:04:01.763699  882252 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1228 07:04:01.793899  882252 cni.go:84] Creating CNI manager for ""
	I1228 07:04:01.793927  882252 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:04:01.793950  882252 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 07:04:01.793979  882252 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-129908 NodeName:default-k8s-diff-port-129908 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:04:01.794135  882252 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-129908"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
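This rendered config is what later lands in /var/tmp/minikube/kubeadm.yaml.new (see the scp below). Assuming a kubeadm recent enough to carry the subcommand and present alongside the other binaries, the file can be sanity-checked standalone, as a sketch:

	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new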
	
	I1228 07:04:01.794234  882252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:04:01.803826  882252 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:04:01.803923  882252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:04:01.814997  882252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1228 07:04:01.830520  882252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:04:01.847094  882252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
	I1228 07:04:01.862575  882252 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:04:01.867240  882252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:04:01.879762  882252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:04:01.981753  882252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:04:02.008768  882252 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908 for IP: 192.168.85.2
	I1228 07:04:02.008791  882252 certs.go:195] generating shared ca certs ...
	I1228 07:04:02.008811  882252 certs.go:227] acquiring lock for ca certs: {Name:mkf3a34076ce55c96c0ca7e803bd863f5c48e3ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:02.008980  882252 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key
	I1228 07:04:02.009054  882252 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key
	I1228 07:04:02.009079  882252 certs.go:257] generating profile certs ...
	I1228 07:04:02.009241  882252 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/client.key
	I1228 07:04:02.009336  882252 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/apiserver.key.e6891321
	I1228 07:04:02.009417  882252 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/proxy-client.key
	I1228 07:04:02.009566  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem (1338 bytes)
	W1228 07:04:02.009614  882252 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878_empty.pem, impossibly tiny 0 bytes
	I1228 07:04:02.009629  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 07:04:02.009669  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem (1078 bytes)
	I1228 07:04:02.009721  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:04:02.009751  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem (1675 bytes)
	I1228 07:04:02.009804  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem (1708 bytes)
	I1228 07:04:02.010516  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:04:02.030969  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:04:02.050983  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:04:02.071754  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:04:02.095835  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1228 07:04:02.119207  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 07:04:02.138541  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:04:02.156606  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1228 07:04:02.175252  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem --> /usr/share/ca-certificates/555878.pem (1338 bytes)
	I1228 07:04:02.193782  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem --> /usr/share/ca-certificates/5558782.pem (1708 bytes)
	I1228 07:04:02.213892  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:04:02.232418  882252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:04:02.246379  882252 ssh_runner.go:195] Run: openssl version
	I1228 07:04:02.253696  882252 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:02.261337  882252 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:04:02.268782  882252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:02.272460  882252 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:29 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:02.272502  882252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:02.309757  882252 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
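b5213941.0 is OpenSSL's hashed lookup name for the CA just linked: x509 -hash prints the subject-name hash, and certificate verification looks for /etc/ssl/certs/<hash>.0. Reproduced by hand, a sketch:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # expected: b5213941
	ls -l /etc/ssl/certs/b5213941.0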
	I1228 07:04:02.318273  882252 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/555878.pem
	I1228 07:04:02.327328  882252 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/555878.pem /etc/ssl/certs/555878.pem
	I1228 07:04:02.336776  882252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/555878.pem
	I1228 07:04:02.341018  882252 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/555878.pem
	I1228 07:04:02.341065  882252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/555878.pem
	I1228 07:04:02.375936  882252 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:04:02.383700  882252 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5558782.pem
	I1228 07:04:02.391577  882252 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5558782.pem /etc/ssl/certs/5558782.pem
	I1228 07:04:02.399167  882252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5558782.pem
	I1228 07:04:02.402932  882252 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/5558782.pem
	I1228 07:04:02.403000  882252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5558782.pem
	I1228 07:04:02.441765  882252 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:04:02.450114  882252 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:04:02.454467  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 07:04:02.490848  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 07:04:02.528538  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 07:04:02.574210  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 07:04:02.634863  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 07:04:02.688071  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
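Each of these -checkend 86400 runs makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), presumably how the restart path decides whether certs need regenerating. The same check by hand, as a sketch:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo 'valid for >24h' || echo 'expires within 24h'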
	I1228 07:04:02.737549  882252 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-129908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-129908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:04:02.737745  882252 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1228 07:04:02.763979  882252 cri.go:83] list returned 8 containers
	I1228 07:04:02.764053  882252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:04:02.776746  882252 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 07:04:02.776774  882252 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 07:04:02.776824  882252 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 07:04:02.788129  882252 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 07:04:02.789314  882252 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-129908" does not appear in /home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:04:02.789957  882252 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-552174/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-129908" cluster setting kubeconfig missing "default-k8s-diff-port-129908" context setting]
	I1228 07:04:02.793405  882252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/kubeconfig: {Name:mk8eb66c78260a0013ac235827a08f86055faf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:02.796509  882252 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 07:04:02.810377  882252 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1228 07:04:02.810427  882252 kubeadm.go:602] duration metric: took 33.643458ms to restartPrimaryControlPlane
	I1228 07:04:02.810439  882252 kubeadm.go:403] duration metric: took 72.900379ms to StartCluster
	I1228 07:04:02.810463  882252 settings.go:142] acquiring lock: {Name:mk0a3f928fed4bf13f8897ea15768d1c7b315118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:02.810543  882252 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:04:02.814033  882252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/kubeconfig: {Name:mk8eb66c78260a0013ac235827a08f86055faf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:02.814425  882252 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1228 07:04:02.814668  882252 config.go:182] Loaded profile config "default-k8s-diff-port-129908": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:04:02.814737  882252 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 07:04:02.814823  882252 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-129908"
	I1228 07:04:02.814837  882252 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-129908"
	W1228 07:04:02.814844  882252 addons.go:248] addon storage-provisioner should already be in state true
	I1228 07:04:02.814873  882252 host.go:66] Checking if "default-k8s-diff-port-129908" exists ...
	I1228 07:04:02.815322  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:04:02.815673  882252 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-129908"
	I1228 07:04:02.815706  882252 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-129908"
	I1228 07:04:02.815709  882252 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-129908"
	I1228 07:04:02.815744  882252 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-129908"
	W1228 07:04:02.815758  882252 addons.go:248] addon dashboard should already be in state true
	I1228 07:04:02.815802  882252 host.go:66] Checking if "default-k8s-diff-port-129908" exists ...
	I1228 07:04:02.815974  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:04:02.816174  882252 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-129908"
	I1228 07:04:02.816207  882252 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-129908"
	W1228 07:04:02.816231  882252 addons.go:248] addon metrics-server should already be in state true
	I1228 07:04:02.816264  882252 host.go:66] Checking if "default-k8s-diff-port-129908" exists ...
	I1228 07:04:02.816395  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:04:02.816719  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:04:02.819436  882252 out.go:179] * Verifying Kubernetes components...
	I1228 07:04:02.823186  882252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:04:02.861166  882252 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1228 07:04:02.861203  882252 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 07:04:02.862499  882252 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1228 07:04:02.862519  882252 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1228 07:04:02.862547  882252 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:04:02.862563  882252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 07:04:02.862596  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:02.862620  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:02.863967  882252 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-129908"
	W1228 07:04:02.863988  882252 addons.go:248] addon default-storageclass should already be in state true
	I1228 07:04:02.864027  882252 host.go:66] Checking if "default-k8s-diff-port-129908" exists ...
	I1228 07:04:02.864484  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:04:02.872650  882252 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1228 07:04:02.877275  882252 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 07:03:58.217522  880223 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1228 07:03:58.222483  880223 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1228 07:03:58.223414  880223 api_server.go:141] control plane version: v1.35.0
	I1228 07:03:58.223442  880223 api_server.go:131] duration metric: took 506.434422ms to wait for apiserver health ...
	I1228 07:03:58.223451  880223 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 07:03:58.227321  880223 system_pods.go:59] 9 kube-system pods found
	I1228 07:03:58.227348  880223 system_pods.go:61] "coredns-7d764666f9-s8grm" [2e6d093f-2877-4940-9c33-86d69096da0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:03:58.227355  880223 system_pods.go:61] "etcd-embed-certs-982151" [c669a8a2-61c2-49c4-9fdc-53dc118bdfc4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:03:58.227362  880223 system_pods.go:61] "kindnet-fchxm" [831cdd63-4a6a-4639-823d-4474a33c5a36] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:03:58.227377  880223 system_pods.go:61] "kube-apiserver-embed-certs-982151" [a909bb5d-6c6d-4bc6-af01-10c1dd644033] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:03:58.227387  880223 system_pods.go:61] "kube-controller-manager-embed-certs-982151" [d6b9118f-9c7a-4f5c-ac1c-5dff86296027] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:03:58.227393  880223 system_pods.go:61] "kube-proxy-z29fh" [18a1ba5f-99d6-468e-aa14-5e347d2894a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:03:58.227400  880223 system_pods.go:61] "kube-scheduler-embed-certs-982151" [5a53225f-f1b0-497e-8376-d7c0c3336b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:03:58.227407  880223 system_pods.go:61] "metrics-server-5d785b57d4-xsks7" [81e3f749-eed9-432c-89e9-f4548e1b7e3f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:03:58.227416  880223 system_pods.go:61] "storage-provisioner" [098bf558-141e-4b8d-a1e3-aaae9afedb15] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:03:58.227424  880223 system_pods.go:74] duration metric: took 3.965842ms to wait for pod list to return data ...
	I1228 07:03:58.227433  880223 default_sa.go:34] waiting for default service account to be created ...
	I1228 07:03:58.229720  880223 default_sa.go:45] found service account: "default"
	I1228 07:03:58.229740  880223 default_sa.go:55] duration metric: took 2.300807ms for default service account to be created ...
	I1228 07:03:58.229747  880223 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 07:03:58.299736  880223 system_pods.go:86] 9 kube-system pods found
	I1228 07:03:58.299772  880223 system_pods.go:89] "coredns-7d764666f9-s8grm" [2e6d093f-2877-4940-9c33-86d69096da0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:03:58.299780  880223 system_pods.go:89] "etcd-embed-certs-982151" [c669a8a2-61c2-49c4-9fdc-53dc118bdfc4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:03:58.299787  880223 system_pods.go:89] "kindnet-fchxm" [831cdd63-4a6a-4639-823d-4474a33c5a36] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:03:58.299793  880223 system_pods.go:89] "kube-apiserver-embed-certs-982151" [a909bb5d-6c6d-4bc6-af01-10c1dd644033] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:03:58.299798  880223 system_pods.go:89] "kube-controller-manager-embed-certs-982151" [d6b9118f-9c7a-4f5c-ac1c-5dff86296027] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:03:58.299804  880223 system_pods.go:89] "kube-proxy-z29fh" [18a1ba5f-99d6-468e-aa14-5e347d2894a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:03:58.299809  880223 system_pods.go:89] "kube-scheduler-embed-certs-982151" [5a53225f-f1b0-497e-8376-d7c0c3336b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:03:58.299816  880223 system_pods.go:89] "metrics-server-5d785b57d4-xsks7" [81e3f749-eed9-432c-89e9-f4548e1b7e3f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:03:58.299823  880223 system_pods.go:89] "storage-provisioner" [098bf558-141e-4b8d-a1e3-aaae9afedb15] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:03:58.299833  880223 system_pods.go:126] duration metric: took 70.080198ms to wait for k8s-apps to be running ...
	I1228 07:03:58.299847  880223 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 07:03:58.299903  880223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:03:58.316616  880223 system_svc.go:56] duration metric: took 16.755637ms WaitForService to wait for kubelet
	I1228 07:03:58.316644  880223 kubeadm.go:587] duration metric: took 3.40319134s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:03:58.316662  880223 node_conditions.go:102] verifying NodePressure condition ...
	I1228 07:03:58.319428  880223 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 07:03:58.319454  880223 node_conditions.go:123] node cpu capacity is 8
	I1228 07:03:58.319469  880223 node_conditions.go:105] duration metric: took 2.802451ms to run NodePressure ...
	I1228 07:03:58.319480  880223 start.go:242] waiting for startup goroutines ...
	I1228 07:03:58.319487  880223 start.go:247] waiting for cluster config update ...
	I1228 07:03:58.319498  880223 start.go:256] writing updated cluster config ...
	I1228 07:03:58.319774  880223 ssh_runner.go:195] Run: rm -f paused
	I1228 07:03:58.324556  880223 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:03:58.327768  880223 pod_ready.go:83] waiting for pod "coredns-7d764666f9-s8grm" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 07:04:00.333468  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:02.334146  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:03:58.931292  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:00.931470  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:02.938234  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	I1228 07:04:02.878697  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 07:04:02.878726  882252 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 07:04:02.878799  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:02.894522  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:02.906490  882252 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 07:04:02.906522  882252 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 07:04:02.906593  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:02.906643  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:02.913956  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:02.933719  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:03.007313  882252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:04:03.020758  882252 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1228 07:04:03.020783  882252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1228 07:04:03.025468  882252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:04:03.025821  882252 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-129908" to be "Ready" ...
	I1228 07:04:03.029952  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 07:04:03.029974  882252 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 07:04:03.039959  882252 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1228 07:04:03.039983  882252 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1228 07:04:03.048956  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 07:04:03.048979  882252 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 07:04:03.049792  882252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:04:03.059613  882252 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:04:03.059634  882252 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1228 07:04:03.069470  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 07:04:03.069493  882252 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 07:04:03.079512  882252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:04:03.090099  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 07:04:03.090125  882252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 07:04:03.109191  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 07:04:03.109228  882252 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 07:04:03.127327  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 07:04:03.127354  882252 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 07:04:03.145332  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 07:04:03.145362  882252 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 07:04:03.161020  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 07:04:03.161041  882252 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 07:04:03.175283  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:04:03.175302  882252 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 07:04:03.190565  882252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:04:04.366811  882252 node_ready.go:49] node "default-k8s-diff-port-129908" is "Ready"
	I1228 07:04:04.366854  882252 node_ready.go:38] duration metric: took 1.340986184s for node "default-k8s-diff-port-129908" to be "Ready" ...
	I1228 07:04:04.366876  882252 api_server.go:52] waiting for apiserver process to appear ...
	I1228 07:04:04.366953  882252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 07:04:05.079411  882252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.05389931s)
	I1228 07:04:05.079504  882252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.029690998s)
	I1228 07:04:05.079781  882252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.000232104s)
	I1228 07:04:05.079814  882252 addons.go:495] Verifying addon metrics-server=true in "default-k8s-diff-port-129908"
	I1228 07:04:05.079947  882252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.889324162s)
	I1228 07:04:05.080255  882252 api_server.go:72] duration metric: took 2.265784615s to wait for apiserver process to appear ...
	I1228 07:04:05.080277  882252 api_server.go:88] waiting for apiserver healthz status ...
	I1228 07:04:05.080339  882252 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1228 07:04:05.082924  882252 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-129908 addons enable metrics-server
	
	I1228 07:04:05.086253  882252 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 07:04:05.086281  882252 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 07:04:05.088197  882252 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I1228 07:04:05.089519  882252 addons.go:530] duration metric: took 2.27478482s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I1228 07:04:05.581379  882252 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1228 07:04:05.586973  882252 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1228 07:04:05.588678  882252 api_server.go:141] control plane version: v1.35.0
	I1228 07:04:05.588713  882252 api_server.go:131] duration metric: took 508.427311ms to wait for apiserver health ...
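	(The [+]/[-] hook listing above is the apiserver's verbose health output. A minimal sketch of reproducing the same probe by hand — assuming the profile's kubeconfig context exists under its default name:

	    # Hits the same endpoint api_server.go polls (https://192.168.85.2:8444/healthz);
	    # '?verbose' expands the per-hook [+]/[-] lines shown in the 500 response above.
	    kubectl --context default-k8s-diff-port-129908 get --raw '/healthz?verbose'
	)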
	I1228 07:04:05.588726  882252 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 07:04:05.592639  882252 system_pods.go:59] 9 kube-system pods found
	I1228 07:04:05.592689  882252 system_pods.go:61] "coredns-7d764666f9-mbfzh" [30aa2ad5-7a16-4fce-8347-17630085bc40] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:04:05.592702  882252 system_pods.go:61] "etcd-default-k8s-diff-port-129908" [7c64ebeb-cc67-4b17-a9c2-765ad4d97798] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:04:05.592722  882252 system_pods.go:61] "kindnet-x5db5" [fb316251-22e1-43e0-b728-1f470d02cacb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:04:05.592730  882252 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-129908" [05583a6f-194c-42d1-a532-f3a5765c51db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:04:05.592740  882252 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-129908" [da0ccaf2-767f-4e33-8a8c-d79d87864368] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:04:05.592750  882252 system_pods.go:61] "kube-proxy-rzg9h" [08ba573c-03a0-4a1c-acf1-f34fce8de5d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:04:05.592758  882252 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-129908" [869d96de-6369-43cd-89b8-96bc40f07bff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:04:05.592765  882252 system_pods.go:61] "metrics-server-5d785b57d4-6h7tv" [3f1c607c-1aeb-4fc7-9865-8b161c339288] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:04:05.592772  882252 system_pods.go:61] "storage-provisioner" [f41b8f35-c7cc-4b8e-ae42-81904d11bcd0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:04:05.592784  882252 system_pods.go:74] duration metric: took 4.051269ms to wait for pod list to return data ...
	I1228 07:04:05.592793  882252 default_sa.go:34] waiting for default service account to be created ...
	I1228 07:04:05.595925  882252 default_sa.go:45] found service account: "default"
	I1228 07:04:05.595948  882252 default_sa.go:55] duration metric: took 3.147858ms for default service account to be created ...
	I1228 07:04:05.595959  882252 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 07:04:05.601261  882252 system_pods.go:86] 9 kube-system pods found
	I1228 07:04:05.601357  882252 system_pods.go:89] "coredns-7d764666f9-mbfzh" [30aa2ad5-7a16-4fce-8347-17630085bc40] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:04:05.601384  882252 system_pods.go:89] "etcd-default-k8s-diff-port-129908" [7c64ebeb-cc67-4b17-a9c2-765ad4d97798] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:04:05.601428  882252 system_pods.go:89] "kindnet-x5db5" [fb316251-22e1-43e0-b728-1f470d02cacb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:04:05.601469  882252 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-129908" [05583a6f-194c-42d1-a532-f3a5765c51db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:04:05.601511  882252 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-129908" [da0ccaf2-767f-4e33-8a8c-d79d87864368] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:04:05.601549  882252 system_pods.go:89] "kube-proxy-rzg9h" [08ba573c-03a0-4a1c-acf1-f34fce8de5d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:04:05.601594  882252 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-129908" [869d96de-6369-43cd-89b8-96bc40f07bff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:04:05.601633  882252 system_pods.go:89] "metrics-server-5d785b57d4-6h7tv" [3f1c607c-1aeb-4fc7-9865-8b161c339288] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:04:05.601672  882252 system_pods.go:89] "storage-provisioner" [f41b8f35-c7cc-4b8e-ae42-81904d11bcd0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:04:05.601685  882252 system_pods.go:126] duration metric: took 5.718923ms to wait for k8s-apps to be running ...
	I1228 07:04:05.601696  882252 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 07:04:05.601792  882252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:04:05.633925  882252 system_svc.go:56] duration metric: took 32.2184ms WaitForService to wait for kubelet
	I1228 07:04:05.633962  882252 kubeadm.go:587] duration metric: took 2.819493554s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:04:05.633987  882252 node_conditions.go:102] verifying NodePressure condition ...
	I1228 07:04:05.639517  882252 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 07:04:05.639550  882252 node_conditions.go:123] node cpu capacity is 8
	I1228 07:04:05.639569  882252 node_conditions.go:105] duration metric: took 5.575875ms to run NodePressure ...
	I1228 07:04:05.639586  882252 start.go:242] waiting for startup goroutines ...
	I1228 07:04:05.639597  882252 start.go:247] waiting for cluster config update ...
	I1228 07:04:05.639614  882252 start.go:256] writing updated cluster config ...
	I1228 07:04:05.639915  882252 ssh_runner.go:195] Run: rm -f paused
	I1228 07:04:05.647014  882252 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:04:05.659239  882252 pod_ready.go:83] waiting for pod "coredns-7d764666f9-mbfzh" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 07:04:02.704180  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:04.704962  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:07.203308  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:04.335906  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:06.878878  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:05.434776  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:07.931491  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:07.670978  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:10.165524  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:09.204178  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:11.703509  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:09.333196  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:11.333675  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:10.430946  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:12.431254  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:12.166364  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:14.665182  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:14.203171  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:16.203563  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:13.334174  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:15.833412  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:17.833543  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:14.931067  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:17.431207  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:16.665304  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:19.164406  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:18.204086  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	I1228 07:04:20.703420  873440 pod_ready.go:94] pod "coredns-7d764666f9-9n78x" is "Ready"
	I1228 07:04:20.703450  873440 pod_ready.go:86] duration metric: took 38.005418075s for pod "coredns-7d764666f9-9n78x" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.705734  873440 pod_ready.go:83] waiting for pod "etcd-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.709107  873440 pod_ready.go:94] pod "etcd-no-preload-456925" is "Ready"
	I1228 07:04:20.709130  873440 pod_ready.go:86] duration metric: took 3.373198ms for pod "etcd-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.711055  873440 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.714256  873440 pod_ready.go:94] pod "kube-apiserver-no-preload-456925" is "Ready"
	I1228 07:04:20.714278  873440 pod_ready.go:86] duration metric: took 3.20057ms for pod "kube-apiserver-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.715898  873440 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.901759  873440 pod_ready.go:94] pod "kube-controller-manager-no-preload-456925" is "Ready"
	I1228 07:04:20.901785  873440 pod_ready.go:86] duration metric: took 185.864424ms for pod "kube-controller-manager-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.101880  873440 pod_ready.go:83] waiting for pod "kube-proxy-mn4cz" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.501912  873440 pod_ready.go:94] pod "kube-proxy-mn4cz" is "Ready"
	I1228 07:04:21.501939  873440 pod_ready.go:86] duration metric: took 400.033432ms for pod "kube-proxy-mn4cz" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.701807  873440 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.102084  873440 pod_ready.go:94] pod "kube-scheduler-no-preload-456925" is "Ready"
	I1228 07:04:22.102117  873440 pod_ready.go:86] duration metric: took 400.282919ms for pod "kube-scheduler-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.102133  873440 pod_ready.go:40] duration metric: took 39.409345661s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:04:22.149656  873440 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 07:04:22.151334  873440 out.go:179] * Done! kubectl is now configured to use "no-preload-456925" cluster and "default" namespace by default
	W1228 07:04:20.333502  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:22.336156  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:19.930823  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	I1228 07:04:21.932881  874073 pod_ready.go:94] pod "coredns-5dd5756b68-kcdsc" is "Ready"
	I1228 07:04:21.932918  874073 pod_ready.go:86] duration metric: took 37.007863966s for pod "coredns-5dd5756b68-kcdsc" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.935882  874073 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.939908  874073 pod_ready.go:94] pod "etcd-old-k8s-version-805353" is "Ready"
	I1228 07:04:21.939936  874073 pod_ready.go:86] duration metric: took 4.02365ms for pod "etcd-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.942466  874073 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.946100  874073 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-805353" is "Ready"
	I1228 07:04:21.946121  874073 pod_ready.go:86] duration metric: took 3.628428ms for pod "kube-apiserver-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.948541  874073 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.129399  874073 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-805353" is "Ready"
	I1228 07:04:22.129426  874073 pod_ready.go:86] duration metric: took 180.865961ms for pod "kube-controller-manager-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.330689  874073 pod_ready.go:83] waiting for pod "kube-proxy-sd5kh" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.729871  874073 pod_ready.go:94] pod "kube-proxy-sd5kh" is "Ready"
	I1228 07:04:22.729898  874073 pod_ready.go:86] duration metric: took 399.179627ms for pod "kube-proxy-sd5kh" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.929709  874073 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:23.329353  874073 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-805353" is "Ready"
	I1228 07:04:23.329386  874073 pod_ready.go:86] duration metric: took 399.644333ms for pod "kube-scheduler-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:23.329402  874073 pod_ready.go:40] duration metric: took 38.409544453s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:04:23.376490  874073 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1228 07:04:23.378140  874073 out.go:203] 
	W1228 07:04:23.379347  874073 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1228 07:04:23.380351  874073 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1228 07:04:23.381580  874073 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-805353" cluster and "default" namespace by default
	W1228 07:04:21.665046  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:23.666729  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:24.833321  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:26.834192  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:26.164189  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:28.164450  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:30.164565  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:29.334446  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:31.833411  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:32.664464  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:34.665842  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:33.833511  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:36.335200  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9123368677433       6e38f40d628db       13 seconds ago       Running             storage-provisioner       2                   4824b8ff43c7a       storage-provisioner                         kube-system
	bf5cb8e7988b9       07655ddf2eebe       49 seconds ago       Running             kubernetes-dashboard      0                   14302898507da       kubernetes-dashboard-b84665fb8-cvlw8        kubernetes-dashboard
	cde56ac9faaca       4921d7a6dffa9       56 seconds ago       Running             kindnet-cni               1                   d350d5de1305c       kindnet-dk8ws                               kube-system
	5e97c22029350       aa5e3ebc0dfed       57 seconds ago       Running             coredns                   1                   b4876a5895583       coredns-7d764666f9-9n78x                    kube-system
	ef612cadc0d38       56cc512116c8f       57 seconds ago       Running             busybox                   1                   7af2f6aeee5ce       busybox                                     default
	f1296a169eb21       6e38f40d628db       57 seconds ago       Exited              storage-provisioner       1                   4824b8ff43c7a       storage-provisioner                         kube-system
	69edbf00b076f       32652ff1bbe6b       57 seconds ago       Running             kube-proxy                1                   fa8b9412fa006       kube-proxy-mn4cz                            kube-system
	a561cde6d111b       2c9a4b058bd7e       About a minute ago   Running             kube-controller-manager   1                   cf933a976189c       kube-controller-manager-no-preload-456925   kube-system
	7e0535e2846f3       5c6acd67e9cd1       About a minute ago   Running             kube-apiserver            1                   11461e2a9fde5       kube-apiserver-no-preload-456925            kube-system
	aa74c485600b4       0a108f7189562       About a minute ago   Running             etcd                      1                   b94fbfb614653       etcd-no-preload-456925                      kube-system
	2af04813f8003       550794e3b12ac       About a minute ago   Running             kube-scheduler            1                   aa5cbdaabc12e       kube-scheduler-no-preload-456925            kube-system
	3d7012d3f21db       56cc512116c8f       About a minute ago   Exited              busybox                   0                   0d3b2e5c8de35       busybox                                     default
	23aa2b21e24e3       aa5e3ebc0dfed       About a minute ago   Exited              coredns                   0                   44580b6d1d020       coredns-7d764666f9-9n78x                    kube-system
	043ebb6277a22       4921d7a6dffa9       About a minute ago   Exited              kindnet-cni               0                   a711c96bcac0f       kindnet-dk8ws                               kube-system
	de160001858cd       32652ff1bbe6b       About a minute ago   Exited              kube-proxy                0                   2b15f9b070221       kube-proxy-mn4cz                            kube-system
	279d491264063       5c6acd67e9cd1       About a minute ago   Exited              kube-apiserver            0                   59fd23dcbf133       kube-apiserver-no-preload-456925            kube-system
	f28528104a734       0a108f7189562       About a minute ago   Exited              etcd                      0                   45cfdf2702257       etcd-no-preload-456925                      kube-system
	0a688a759adeb       550794e3b12ac       About a minute ago   Exited              kube-scheduler            0                   ab78a57fc4700       kube-scheduler-no-preload-456925            kube-system
	ea79d3f01ce4a       2c9a4b058bd7e       About a minute ago   Exited              kube-controller-manager   0                   eafabcdac4bc1       kube-controller-manager-no-preload-456925   kube-system
	
	
	==> containerd <==
	Dec 28 07:04:25 no-preload-456925 containerd[448]: time="2025-12-28T07:04:25.478251950Z" level=info msg="StartContainer for \"9123368677433e9f5ddefa4e3479f028568442ba461d66a62dffcc6d6ab209df\" returns successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.598077348Z" level=info msg="StopPodSandbox for \"c3991febbd61bbd33aa210a74932ffa5b28ec0a873c81234b4a465b42a9cf5a0\""
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.599319081Z" level=info msg="TearDown network for sandbox \"c3991febbd61bbd33aa210a74932ffa5b28ec0a873c81234b4a465b42a9cf5a0\" successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.599377926Z" level=info msg="StopPodSandbox for \"c3991febbd61bbd33aa210a74932ffa5b28ec0a873c81234b4a465b42a9cf5a0\" returns successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.600333949Z" level=info msg="RemovePodSandbox for \"c3991febbd61bbd33aa210a74932ffa5b28ec0a873c81234b4a465b42a9cf5a0\""
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.600372486Z" level=info msg="Forcibly stopping sandbox \"c3991febbd61bbd33aa210a74932ffa5b28ec0a873c81234b4a465b42a9cf5a0\""
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.601043532Z" level=info msg="TearDown network for sandbox \"c3991febbd61bbd33aa210a74932ffa5b28ec0a873c81234b4a465b42a9cf5a0\" successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.604662206Z" level=info msg="Ensure that sandbox c3991febbd61bbd33aa210a74932ffa5b28ec0a873c81234b4a465b42a9cf5a0 in task-service has been cleanup successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.608547506Z" level=info msg="RemovePodSandbox \"c3991febbd61bbd33aa210a74932ffa5b28ec0a873c81234b4a465b42a9cf5a0\" returns successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.609270370Z" level=info msg="StopPodSandbox for \"9aa6e4039033056c6a5a6fd0bfd0cfbae6c1e3d3164c0d117ec63a00266f5730\""
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.635701567Z" level=info msg="TearDown network for sandbox \"9aa6e4039033056c6a5a6fd0bfd0cfbae6c1e3d3164c0d117ec63a00266f5730\" successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.635766259Z" level=info msg="StopPodSandbox for \"9aa6e4039033056c6a5a6fd0bfd0cfbae6c1e3d3164c0d117ec63a00266f5730\" returns successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.637832493Z" level=info msg="RemovePodSandbox for \"9aa6e4039033056c6a5a6fd0bfd0cfbae6c1e3d3164c0d117ec63a00266f5730\""
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.638059576Z" level=info msg="Forcibly stopping sandbox \"9aa6e4039033056c6a5a6fd0bfd0cfbae6c1e3d3164c0d117ec63a00266f5730\""
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.665442685Z" level=info msg="TearDown network for sandbox \"9aa6e4039033056c6a5a6fd0bfd0cfbae6c1e3d3164c0d117ec63a00266f5730\" successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.668162592Z" level=info msg="Ensure that sandbox 9aa6e4039033056c6a5a6fd0bfd0cfbae6c1e3d3164c0d117ec63a00266f5730 in task-service has been cleanup successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.671860449Z" level=info msg="RemovePodSandbox \"9aa6e4039033056c6a5a6fd0bfd0cfbae6c1e3d3164c0d117ec63a00266f5730\" returns successfully"
	Dec 28 07:04:35 no-preload-456925 containerd[448]: time="2025-12-28T07:04:35.855275383Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Dec 28 07:04:36 no-preload-456925 containerd[448]: time="2025-12-28T07:04:36.897300384Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 28 07:04:36 no-preload-456925 containerd[448]: time="2025-12-28T07:04:36.946910193Z" level=info msg="fetch failed" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" host=fake.domain
	Dec 28 07:04:36 no-preload-456925 containerd[448]: time="2025-12-28T07:04:36.948236258Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Dec 28 07:04:36 no-preload-456925 containerd[448]: time="2025-12-28T07:04:36.948258213Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:04:36 no-preload-456925 containerd[448]: time="2025-12-28T07:04:36.949376636Z" level=info msg="PullImage \"registry.k8s.io/echoserver:1.4\""
	Dec 28 07:04:36 no-preload-456925 containerd[448]: time="2025-12-28T07:04:36.998838361Z" level=error msg="PullImage \"registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\""
	Dec 28 07:04:36 no-preload-456925 containerd[448]: time="2025-12-28T07:04:36.998912772Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
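	(The two pull failures above are distinct: fake.domain is, as the name suggests, an unresolvable registry the test points metrics-server at, while registry.k8s.io/echoserver:1.4 is rejected because it is still published as a Docker schema 1 manifest, which containerd v2.1+ no longer accepts. A minimal sketch of confirming the manifest media type from the host — assuming the crane tool from go-containerregistry is installed:

	    # Fetches the raw manifest; its mediaType field should show the deprecated
	    # application/vnd.docker.distribution.manifest.v1+prettyjws named in the error.
	    crane manifest registry.k8s.io/echoserver:1.4
	)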
	
	
	==> describe nodes <==
	Name:               no-preload-456925
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-456925
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=no-preload-456925
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T07_02_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 07:02:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-456925
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 07:04:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 07:04:35 +0000   Sun, 28 Dec 2025 07:02:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 07:04:35 +0000   Sun, 28 Dec 2025 07:02:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 07:04:35 +0000   Sun, 28 Dec 2025 07:02:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 07:04:35 +0000   Sun, 28 Dec 2025 07:03:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-456925
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                903abed0-0c6b-40fa-b40c-70f3131d91ee
	  Boot ID:                    b0f6328b-901c-4d58-bf8e-80c711dcb897
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-7d764666f9-9n78x                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-no-preload-456925                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-dk8ws                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-456925              250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-no-preload-456925     200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-mn4cz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-456925              100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 metrics-server-5d785b57d4-j58xv               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         80s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-84lwq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-cvlw8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  110s  node-controller  Node no-preload-456925 event: Registered Node no-preload-456925 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node no-preload-456925 event: Registered Node no-preload-456925 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 27 65 9e 6b ac 08 06
	[  +0.000464] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 2e 94 6c 92 e4 08 06
	[Dec28 07:01] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff a6 29 4f 66 86 ca 08 06
	[  +0.004985] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 91 2b a7 d5 cf 08 06
	[ +18.742603] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 b5 93 ca e4 71 08 06
	[  +0.109309] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 c6 37 a9 4f 21 08 06
	[  +8.643062] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 52 6e 36 23 da 08 06
	[ +19.438211] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 70 fa ed e0 93 08 06
	[  +0.000530] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa 52 6e 36 23 da 08 06
	[  +0.857565] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 1d c3 e6 9a 7e 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 29 4f 66 86 ca 08 06
	[Dec28 07:02] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 a0 55 33 45 d1 08 06
	[  +0.000392] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 c6 37 a9 4f 21 08 06
	
	
	==> kernel <==
	 07:04:39 up  3:47,  0 user,  load average: 2.34, 2.93, 10.73
	Linux no-preload-456925 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:04:35 no-preload-456925 kubelet[2384]: I1228 07:04:35.854464    2384 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 28 07:04:35 no-preload-456925 kubelet[2384]: I1228 07:04:35.856986    2384 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: I1228 07:04:36.583470    2384 apiserver.go:52] "Watching apiserver"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: I1228 07:04:36.598768    2384 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: I1228 07:04:36.614339    2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfb9008b-3ff4-46c3-9a73-02346d316ffe-xtables-lock\") pod \"kube-proxy-mn4cz\" (UID: \"bfb9008b-3ff4-46c3-9a73-02346d316ffe\") " pod="kube-system/kube-proxy-mn4cz"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: I1228 07:04:36.614408    2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb39c84f-7933-4fe0-b2c7-5856f267f779-xtables-lock\") pod \"kindnet-dk8ws\" (UID: \"eb39c84f-7933-4fe0-b2c7-5856f267f779\") " pod="kube-system/kindnet-dk8ws"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: I1228 07:04:36.614436    2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb39c84f-7933-4fe0-b2c7-5856f267f779-lib-modules\") pod \"kindnet-dk8ws\" (UID: \"eb39c84f-7933-4fe0-b2c7-5856f267f779\") " pod="kube-system/kindnet-dk8ws"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: I1228 07:04:36.614477    2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eb39c84f-7933-4fe0-b2c7-5856f267f779-cni-cfg\") pod \"kindnet-dk8ws\" (UID: \"eb39c84f-7933-4fe0-b2c7-5856f267f779\") " pod="kube-system/kindnet-dk8ws"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: I1228 07:04:36.614643    2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfb9008b-3ff4-46c3-9a73-02346d316ffe-lib-modules\") pod \"kube-proxy-mn4cz\" (UID: \"bfb9008b-3ff4-46c3-9a73-02346d316ffe\") " pod="kube-system/kube-proxy-mn4cz"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: I1228 07:04:36.614724    2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a354a4db-d6c9-48f0-8af3-a9b43170f760-tmp\") pod \"storage-provisioner\" (UID: \"a354a4db-d6c9-48f0-8af3-a9b43170f760\") " pod="kube-system/storage-provisioner"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.705920    2384 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-456925" containerName="kube-controller-manager"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.706011    2384 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-456925" containerName="kube-scheduler"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.706370    2384 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-456925" containerName="kube-apiserver"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.706500    2384 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-456925" containerName="etcd"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.948614    2384 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.948741    2384 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.949126    2384 kuberuntime_manager.go:1664] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-5d785b57d4-j58xv_kube-system(c99710d1-f46f-48c7-a13e-e9e499a79199): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" logger="UnhandledError"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.949197    2384 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-j58xv" podUID="c99710d1-f46f-48c7-a13e-e9e499a79199"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.999176    2384 log.go:32] "PullImage from image service failed" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.999296    2384 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.999600    2384 kuberuntime_manager.go:1664] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-867fb5f87b-84lwq_kubernetes-dashboard(44cc5d60-86b0-4d7c-8b7f-a839e785e3df): ErrImagePull: rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" logger="UnhandledError"
	Dec 28 07:04:36 no-preload-456925 kubelet[2384]: E1228 07:04:36.999673    2384 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"rpc error: code = Unimplemented desc = failed to pull and unpack image \\\"registry.k8s.io/echoserver:1.4\\\": not implemented: media type \\\"application/vnd.docker.distribution.manifest.v1+prettyjws\\\" is no longer supported since containerd v2.1, please rebuild the image as \\\"application/vnd.docker.distribution.manifest.v2+json\\\" or \\\"application/vnd.oci.image.manifest.v1+json\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-84lwq" podUID="44cc5d60-86b0-4d7c-8b7f-a839e785e3df"
	Dec 28 07:04:37 no-preload-456925 kubelet[2384]: E1228 07:04:37.709313    2384 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-456925" containerName="etcd"
	Dec 28 07:04:37 no-preload-456925 kubelet[2384]: E1228 07:04:37.709436    2384 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-456925" containerName="kube-scheduler"
	Dec 28 07:04:37 no-preload-456925 kubelet[2384]: E1228 07:04:37.709550    2384 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-456925" containerName="kube-apiserver"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-456925 -n no-preload-456925
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-456925 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-5d785b57d4-j58xv dashboard-metrics-scraper-867fb5f87b-84lwq
helpers_test.go:283: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context no-preload-456925 describe pod metrics-server-5d785b57d4-j58xv dashboard-metrics-scraper-867fb5f87b-84lwq
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context no-preload-456925 describe pod metrics-server-5d785b57d4-j58xv dashboard-metrics-scraper-867fb5f87b-84lwq: exit status 1 (68.475121ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5d785b57d4-j58xv" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-84lwq" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context no-preload-456925 describe pod metrics-server-5d785b57d4-j58xv dashboard-metrics-scraper-867fb5f87b-84lwq: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (6.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-805353 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-805353 -n old-k8s-version-805353
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-805353 -n old-k8s-version-805353: exit status 2 (399.788877ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Running"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-805353 -n old-k8s-version-805353
E1228 07:04:36.135807  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/auto-258759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-805353 -n old-k8s-version-805353: exit status 2 (365.16934ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
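
"Stopped" is the half of pause that succeeded: minikube pause freezes the cluster's containers and stops the kubelet, so the failure above is only the apiserver still reporting Running. A hedged cross-check the harness does not run (assuming crictl inside the kicbase node, which it ships), asking the CRI whether the kube-apiserver container is in fact still Running:

	out/minikube-linux-amd64 -p old-k8s-version-805353 ssh -- sudo crictl ps --name kube-apiserver --state Running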
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-805353 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-805353 -n old-k8s-version-805353
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-805353 -n old-k8s-version-805353
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-805353
helpers_test.go:244: (dbg) docker inspect old-k8s-version-805353:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c25d9621e8a13ec4500bf128d4892c0aa1e846dc13ee0e08c11402124111b6d9",
	        "Created": "2025-12-28T07:02:25.634018546Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 874370,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:03:33.52297728Z",
	            "FinishedAt": "2025-12-28T07:03:32.540399369Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/c25d9621e8a13ec4500bf128d4892c0aa1e846dc13ee0e08c11402124111b6d9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c25d9621e8a13ec4500bf128d4892c0aa1e846dc13ee0e08c11402124111b6d9/hostname",
	        "HostsPath": "/var/lib/docker/containers/c25d9621e8a13ec4500bf128d4892c0aa1e846dc13ee0e08c11402124111b6d9/hosts",
	        "LogPath": "/var/lib/docker/containers/c25d9621e8a13ec4500bf128d4892c0aa1e846dc13ee0e08c11402124111b6d9/c25d9621e8a13ec4500bf128d4892c0aa1e846dc13ee0e08c11402124111b6d9-json.log",
	        "Name": "/old-k8s-version-805353",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-805353:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-805353",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c25d9621e8a13ec4500bf128d4892c0aa1e846dc13ee0e08c11402124111b6d9",
	                "LowerDir": "/var/lib/docker/overlay2/348e30cc16783e6a24603dac02d7a170a1720e1aa6d57621b0b3d7121b58c167-init/diff:/var/lib/docker/overlay2/dfc7a4c580b7be84b0f83410b7478c6f6aa3f00b996556623ab9129bd6527422/diff",
	                "MergedDir": "/var/lib/docker/overlay2/348e30cc16783e6a24603dac02d7a170a1720e1aa6d57621b0b3d7121b58c167/merged",
	                "UpperDir": "/var/lib/docker/overlay2/348e30cc16783e6a24603dac02d7a170a1720e1aa6d57621b0b3d7121b58c167/diff",
	                "WorkDir": "/var/lib/docker/overlay2/348e30cc16783e6a24603dac02d7a170a1720e1aa6d57621b0b3d7121b58c167/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-805353",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-805353/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-805353",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-805353",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-805353",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c8cbac74bc625ae94cf15632d144f827e91ea376b8aca964d3a55637e6c3c255",
	            "SandboxKey": "/var/run/docker/netns/c8cbac74bc62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-805353": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e34afc1724f9ec05151f68e362a6a5ad479b128bd03dbc2c9ee16903f92b971d",
	                    "EndpointID": "9df85c6d98082c54bf3a28ae3f5e1a8b968d7df9a34d9c06fec542e99a2333f3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "52:75:cf:6f:aa:ba",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-805353",
	                        "c25d9621e8a1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
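
Note the container-level view above: the kicbase node stays "Status": "running" with "Paused": false even after minikube pause, because pause acts on the workloads inside the node, not on the outer Docker container. The two fields the post-mortem cares about can be pulled in one illustrative one-liner:

	docker inspect old-k8s-version-805353 --format 'status={{.State.Status}} paused={{.State.Paused}}'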
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-805353 -n old-k8s-version-805353
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-805353 logs -n 25
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p stopped-upgrade-153407                                                                                                                                                                                                                           │ stopped-upgrade-153407       │ jenkins │ v1.37.0 │ 28 Dec 25 07:02 UTC │ 28 Dec 25 07:02 UTC │
	│ delete  │ -p disable-driver-mounts-284795                                                                                                                                                                                                                     │ disable-driver-mounts-284795 │ jenkins │ v1.37.0 │ 28 Dec 25 07:02 UTC │ 28 Dec 25 07:02 UTC │
	│ start   │ -p default-k8s-diff-port-129908 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-129908 │ jenkins │ v1.37.0 │ 28 Dec 25 07:02 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable metrics-server -p no-preload-456925 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ stop    │ -p no-preload-456925 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-805353 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ stop    │ -p old-k8s-version-805353 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable dashboard -p no-preload-456925 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p no-preload-456925 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                       │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-805353 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p old-k8s-version-805353 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-982151 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-982151           │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ stop    │ -p embed-certs-982151 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-982151           │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-129908 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-129908 │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ stop    │ -p default-k8s-diff-port-129908 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-129908 │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-982151 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-982151           │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p embed-certs-982151 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                        │ embed-certs-982151           │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-129908 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-129908 │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p default-k8s-diff-port-129908 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-129908 │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │                     │
	│ image   │ no-preload-456925 image list --format=json                                                                                                                                                                                                          │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p no-preload-456925 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ image   │ old-k8s-version-805353 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ unpause │ -p no-preload-456925 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p old-k8s-version-805353 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ unpause │ -p old-k8s-version-805353 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:03:55
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:03:55.738532  882252 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:03:55.738689  882252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:03:55.738703  882252 out.go:374] Setting ErrFile to fd 2...
	I1228 07:03:55.738721  882252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:03:55.739259  882252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 07:03:55.740037  882252 out.go:368] Setting JSON to false
	I1228 07:03:55.742377  882252 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13580,"bootTime":1766891856,"procs":388,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 07:03:55.742479  882252 start.go:143] virtualization: kvm guest
	I1228 07:03:55.744573  882252 out.go:179] * [default-k8s-diff-port-129908] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 07:03:55.745969  882252 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:03:55.746036  882252 notify.go:221] Checking for updates...
	I1228 07:03:55.749018  882252 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:03:55.750368  882252 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:03:55.751423  882252 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	I1228 07:03:55.752505  882252 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 07:03:55.753752  882252 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:03:55.755380  882252 config.go:182] Loaded profile config "default-k8s-diff-port-129908": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:03:55.756046  882252 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:03:55.782846  882252 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 07:03:55.782996  882252 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:03:55.852117  882252 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-28 07:03:55.840745048 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 07:03:55.852300  882252 docker.go:319] overlay module found
	I1228 07:03:55.853848  882252 out.go:179] * Using the docker driver based on existing profile
	I1228 07:03:55.855042  882252 start.go:309] selected driver: docker
	I1228 07:03:55.855063  882252 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-129908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-129908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:03:55.855165  882252 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:03:55.855840  882252 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:03:55.917473  882252 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-28 07:03:55.906550203 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 07:03:55.917793  882252 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:03:55.917933  882252 cni.go:84] Creating CNI manager for ""
	I1228 07:03:55.918027  882252 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:03:55.918113  882252 start.go:353] cluster config:
	{Name:default-k8s-diff-port-129908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-129908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:03:55.919833  882252 out.go:179] * Starting "default-k8s-diff-port-129908" primary control-plane node in "default-k8s-diff-port-129908" cluster
	I1228 07:03:55.920969  882252 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:03:55.922122  882252 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:03:55.923232  882252 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:03:55.923274  882252 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-552174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-amd64.tar.lz4
	I1228 07:03:55.923284  882252 cache.go:65] Caching tarball of preloaded images
	I1228 07:03:55.923341  882252 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:03:55.923383  882252 preload.go:251] Found /home/jenkins/minikube-integration/22352-552174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1228 07:03:55.923396  882252 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1228 07:03:55.923509  882252 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/config.json ...
	I1228 07:03:55.945420  882252 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:03:55.945450  882252 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:03:55.945480  882252 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:03:55.945524  882252 start.go:360] acquireMachinesLock for default-k8s-diff-port-129908: {Name:mk66a28d31a5a7f03f0abd1dfec44af622c036e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:03:55.945595  882252 start.go:364] duration metric: took 45.236µs to acquireMachinesLock for "default-k8s-diff-port-129908"
	I1228 07:03:55.945619  882252 start.go:96] Skipping create...Using existing machine configuration
	I1228 07:03:55.945629  882252 fix.go:54] fixHost starting: 
	I1228 07:03:55.945869  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:03:55.966941  882252 fix.go:112] recreateIfNeeded on default-k8s-diff-port-129908: state=Stopped err=<nil>
	W1228 07:03:55.966987  882252 fix.go:138] unexpected machine state, will restart: <nil>
	W1228 07:03:53.203811  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:03:55.206598  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	I1228 07:03:54.958080  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 07:03:54.958206  880223 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 07:03:54.958289  880223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-982151
	I1228 07:03:54.961176  880223 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:03:54.961312  880223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 07:03:54.961373  880223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-982151
	I1228 07:03:54.988639  880223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/embed-certs-982151/id_rsa Username:docker}
	I1228 07:03:54.999061  880223 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 07:03:54.999103  880223 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 07:03:54.999305  880223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-982151
	I1228 07:03:55.003431  880223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/embed-certs-982151/id_rsa Username:docker}
	I1228 07:03:55.007935  880223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/embed-certs-982151/id_rsa Username:docker}
	I1228 07:03:55.029653  880223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/embed-certs-982151/id_rsa Username:docker}
	I1228 07:03:55.081852  880223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:03:55.098405  880223 node_ready.go:35] waiting up to 6m0s for node "embed-certs-982151" to be "Ready" ...
	I1228 07:03:55.112422  880223 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1228 07:03:55.112450  880223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1228 07:03:55.121850  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:03:55.121890  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 07:03:55.121900  880223 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 07:03:55.136569  880223 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1228 07:03:55.136606  880223 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1228 07:03:55.143729  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:03:55.147425  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 07:03:55.147451  880223 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 07:03:55.172581  880223 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:03:55.172615  880223 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1228 07:03:55.176850  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 07:03:55.176888  880223 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 07:03:55.210186  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 07:03:55.210579  880223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 07:03:55.215104  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:03:55.244553  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 07:03:55.244590  880223 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1228 07:03:55.261806  880223 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1228 07:03:55.261878  880223 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1228 07:03:55.261947  880223 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1228 07:03:55.294458  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 07:03:55.294486  880223 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 07:03:55.326344  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 07:03:55.326372  880223 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 07:03:55.346621  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 07:03:55.346647  880223 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 07:03:55.375997  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:03:55.376020  880223 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 07:03:55.391494  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:03:55.441974  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:03:55.603856  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:03:57.038487  880223 node_ready.go:49] node "embed-certs-982151" is "Ready"
	I1228 07:03:57.038523  880223 node_ready.go:38] duration metric: took 1.940079394s for node "embed-certs-982151" to be "Ready" ...
	I1228 07:03:57.038543  880223 api_server.go:52] waiting for apiserver process to appear ...
	I1228 07:03:57.038606  880223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 07:03:57.704736  880223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.489588386s)
	I1228 07:03:57.704780  880223 addons.go:495] Verifying addon metrics-server=true in "embed-certs-982151"
	I1228 07:03:57.704836  880223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.313298327s)
	I1228 07:03:57.705109  880223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.263104063s)
	I1228 07:03:57.707172  880223 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-982151 addons enable metrics-server
	
	I1228 07:03:57.716966  880223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.11307566s)
	I1228 07:03:57.716984  880223 api_server.go:72] duration metric: took 2.803527403s to wait for apiserver process to appear ...
	I1228 07:03:57.717001  880223 api_server.go:88] waiting for apiserver healthz status ...
	I1228 07:03:57.717019  880223 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1228 07:03:57.719801  880223 out.go:179] * Enabled addons: metrics-server, dashboard, storage-provisioner, default-storageclass
	I1228 07:03:57.721544  880223 addons.go:530] duration metric: took 2.808027569s for enable addons: enabled=[metrics-server dashboard storage-provisioner default-storageclass]
	I1228 07:03:57.721710  880223 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 07:03:57.721773  880223 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 07:03:53.930573  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:03:56.432760  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	I1228 07:03:55.969541  882252 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-129908" ...
	I1228 07:03:55.969643  882252 cli_runner.go:164] Run: docker start default-k8s-diff-port-129908
	I1228 07:03:56.285009  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:03:56.317632  882252 kic.go:430] container "default-k8s-diff-port-129908" state is running.
	I1228 07:03:56.318755  882252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-129908
	I1228 07:03:56.344384  882252 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/config.json ...
	I1228 07:03:56.344656  882252 machine.go:94] provisionDockerMachine start ...
	I1228 07:03:56.344759  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:03:56.367745  882252 main.go:144] libmachine: Using SSH client type: native
	I1228 07:03:56.368021  882252 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1228 07:03:56.368034  882252 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:03:56.368796  882252 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41308->127.0.0.1:33133: read: connection reset by peer
	I1228 07:03:59.512247  882252 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-129908
	
	I1228 07:03:59.512276  882252 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-129908"
	I1228 07:03:59.512350  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:03:59.534401  882252 main.go:144] libmachine: Using SSH client type: native
	I1228 07:03:59.534744  882252 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1228 07:03:59.534766  882252 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-129908 && echo "default-k8s-diff-port-129908" | sudo tee /etc/hostname
	I1228 07:03:59.684180  882252 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-129908
	
	I1228 07:03:59.684288  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:03:59.705307  882252 main.go:144] libmachine: Using SSH client type: native
	I1228 07:03:59.705585  882252 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1228 07:03:59.705613  882252 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-129908' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-129908/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-129908' | sudo tee -a /etc/hosts; 
				fi
			fi
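
The SSH command above keeps /etc/hosts consistent with the new hostname: if no line already ends in the machine name, the existing 127.0.1.1 entry is rewritten, or a fresh one is appended. A plain Go translation of that shell logic (a sketch under the same inputs, not minikube's code):

    // Sketch of the idempotent /etc/hosts rewrite performed over SSH above.
    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    func ensureHostsEntry(hosts, name string) string {
    	// Mirrors: grep -xq '.*\s<name>' /etc/hosts
    	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
    		return hosts // hostname already present
    	}
    	// Mirrors: sed -i 's/^127.0.1.1\s.*/127.0.1.1 <name>/g'
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loopback.MatchString(hosts) {
    		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
    	}
    	// Mirrors: echo '127.0.1.1 <name>' | tee -a /etc/hosts
    	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
    	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "default-k8s-diff-port-129908"))
    }
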
	I1228 07:03:59.844260  882252 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 07:03:59.844297  882252 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-552174/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-552174/.minikube}
	I1228 07:03:59.844323  882252 ubuntu.go:190] setting up certificates
	I1228 07:03:59.844346  882252 provision.go:84] configureAuth start
	I1228 07:03:59.844416  882252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-129908
	I1228 07:03:59.865173  882252 provision.go:143] copyHostCerts
	I1228 07:03:59.865247  882252 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem, removing ...
	I1228 07:03:59.865261  882252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem
	I1228 07:03:59.865342  882252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem (1078 bytes)
	I1228 07:03:59.865484  882252 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem, removing ...
	I1228 07:03:59.865498  882252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem
	I1228 07:03:59.865539  882252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem (1123 bytes)
	I1228 07:03:59.865612  882252 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem, removing ...
	I1228 07:03:59.865623  882252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem
	I1228 07:03:59.865658  882252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem (1675 bytes)
	I1228 07:03:59.865731  882252 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-129908 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-129908 localhost minikube]
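
The server cert generated here is signed by the minikube CA and carries exactly the SAN list shown in that log line. A minimal issuance sketch with crypto/x509 (illustrative only: the real flow loads the CA pair from ca.pem/ca-key.pem rather than generating one, and writes the result to server.pem):

    // Sketch of issuing a CA-signed server cert with the SANs from the log.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// In the real flow the CA pair is loaded from disk, not generated.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-129908"}},
    		// SANs as logged: 127.0.0.1 192.168.85.2 plus the DNS names.
    		DNSNames:    []string{"default-k8s-diff-port-129908", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    		NotBefore:   time.Now(),
    		NotAfter:    time.Now().Add(26280 * time.Hour),
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
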
	I1228 07:03:59.897890  882252 provision.go:177] copyRemoteCerts
	I1228 07:03:59.897972  882252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:03:59.898024  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:03:59.918735  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:00.019603  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1228 07:04:00.042819  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1228 07:04:00.064302  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 07:04:00.085628  882252 provision.go:87] duration metric: took 241.249279ms to configureAuth
	I1228 07:04:00.085661  882252 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:04:00.085909  882252 config.go:182] Loaded profile config "default-k8s-diff-port-129908": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:04:00.085931  882252 machine.go:97] duration metric: took 3.741255863s to provisionDockerMachine
	I1228 07:04:00.085943  882252 start.go:293] postStartSetup for "default-k8s-diff-port-129908" (driver="docker")
	I1228 07:04:00.085955  882252 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:04:00.086021  882252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:04:00.086092  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:00.107294  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:00.213532  882252 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:04:00.217985  882252 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:04:00.218016  882252 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:04:00.218030  882252 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-552174/.minikube/addons for local assets ...
	I1228 07:04:00.218175  882252 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-552174/.minikube/files for local assets ...
	I1228 07:04:00.218332  882252 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem -> 5558782.pem in /etc/ssl/certs
	I1228 07:04:00.218449  882252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:04:00.227795  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem --> /etc/ssl/certs/5558782.pem (1708 bytes)
	I1228 07:04:00.251291  882252 start.go:296] duration metric: took 165.331058ms for postStartSetup
	I1228 07:04:00.251386  882252 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:04:00.251464  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:00.276621  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:00.373799  882252 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:04:00.379539  882252 fix.go:56] duration metric: took 4.433903055s for fixHost
	I1228 07:04:00.379578  882252 start.go:83] releasing machines lock for "default-k8s-diff-port-129908", held for 4.433956892s
	I1228 07:04:00.379650  882252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-129908
	I1228 07:04:00.401020  882252 ssh_runner.go:195] Run: cat /version.json
	I1228 07:04:00.401076  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:00.401098  882252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 07:04:00.401197  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:00.423791  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:00.424146  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:00.518927  882252 ssh_runner.go:195] Run: systemctl --version
	I1228 07:04:00.588110  882252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:04:00.594268  882252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:04:00.594347  882252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:04:00.604690  882252 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 07:04:00.604711  882252 start.go:496] detecting cgroup driver to use...
	I1228 07:04:00.604747  882252 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 07:04:00.604794  882252 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1228 07:04:00.626916  882252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:04:00.642927  882252 docker.go:218] disabling cri-docker service (if available) ...
	I1228 07:04:00.643006  882252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 07:04:00.661151  882252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 07:04:00.677071  882252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 07:04:00.782881  882252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 07:04:00.886938  882252 docker.go:234] disabling docker service ...
	I1228 07:04:00.887034  882252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 07:04:00.905648  882252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 07:04:00.922255  882252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 07:04:01.032767  882252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 07:04:01.151567  882252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 07:04:01.167546  882252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:04:01.184683  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1228 07:04:01.195723  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:04:01.206605  882252 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1228 07:04:01.206685  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1228 07:04:01.217347  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:04:01.227406  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:04:01.238026  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:04:01.252602  882252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:04:01.262955  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:04:01.274767  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:04:01.285746  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1228 07:04:01.296288  882252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:04:01.305400  882252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:04:01.314805  882252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:04:01.409989  882252 ssh_runner.go:195] Run: sudo systemctl restart containerd
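
Taken together, the sed edits above leave the CRI settings in /etc/containerd/config.toml looking roughly like this (an excerpt reconstructed from the commands; the exact plugin section paths depend on the containerd config version, and containerd 2.x may namespace them differently):

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true
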
	I1228 07:04:01.571109  882252 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1228 07:04:01.571191  882252 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1228 07:04:01.575916  882252 start.go:574] Will wait 60s for crictl version
	I1228 07:04:01.575986  882252 ssh_runner.go:195] Run: which crictl
	I1228 07:04:01.580123  882252 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:04:01.609855  882252 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1228 07:04:01.609941  882252 ssh_runner.go:195] Run: containerd --version
	I1228 07:04:01.636563  882252 ssh_runner.go:195] Run: containerd --version
	I1228 07:04:01.664300  882252 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	W1228 07:03:57.704861  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:00.204893  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	I1228 07:04:01.666266  882252 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-129908 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:04:01.685605  882252 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1228 07:04:01.690424  882252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:04:01.702737  882252 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-129908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-129908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:04:01.702926  882252 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:04:01.702997  882252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:04:01.733786  882252 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:04:01.733821  882252 containerd.go:542] Images already preloaded, skipping extraction
	I1228 07:04:01.733892  882252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:04:01.763427  882252 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:04:01.763453  882252 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:04:01.763463  882252 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 containerd true true} ...
	I1228 07:04:01.763630  882252 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-129908 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-129908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 07:04:01.763699  882252 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1228 07:04:01.793899  882252 cni.go:84] Creating CNI manager for ""
	I1228 07:04:01.793927  882252 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:04:01.793950  882252 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 07:04:01.793979  882252 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-129908 NodeName:default-k8s-diff-port-129908 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:04:01.794135  882252 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-129908"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
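One quick way to sanity-check the KubeletConfiguration document above is to unmarshal it and inspect the eviction settings, which, as the inline comment in the config notes, effectively disable disk-pressure eviction and image GC. A sketch using gopkg.in/yaml.v3 (an assumption for illustration; kubeadm uses its own scheme-aware loader):

    // Parse check for the KubeletConfiguration eviction settings above.
    package main

    import (
    	"fmt"

    	"gopkg.in/yaml.v3"
    )

    const kubeletCfg = `
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    imageGCHighThresholdPercent: 100
    evictionHard:
      nodefs.available: "0%"
      nodefs.inodesFree: "0%"
      imagefs.available: "0%"
    failSwapOn: false
    `

    func main() {
    	var cfg struct {
    		ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
    		EvictionHard                map[string]string `yaml:"evictionHard"`
    		FailSwapOn                  bool              `yaml:"failSwapOn"`
    	}
    	if err := yaml.Unmarshal([]byte(kubeletCfg), &cfg); err != nil {
    		panic(err)
    	}
    	// 0% thresholds mean the kubelet never evicts for disk pressure.
    	fmt.Printf("%+v\n", cfg)
    }
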
	I1228 07:04:01.794234  882252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:04:01.803826  882252 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:04:01.803923  882252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:04:01.814997  882252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1228 07:04:01.830520  882252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:04:01.847094  882252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
	I1228 07:04:01.862575  882252 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:04:01.867240  882252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:04:01.879762  882252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:04:01.981753  882252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:04:02.008768  882252 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908 for IP: 192.168.85.2
	I1228 07:04:02.008791  882252 certs.go:195] generating shared ca certs ...
	I1228 07:04:02.008811  882252 certs.go:227] acquiring lock for ca certs: {Name:mkf3a34076ce55c96c0ca7e803bd863f5c48e3ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:02.008980  882252 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key
	I1228 07:04:02.009054  882252 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key
	I1228 07:04:02.009079  882252 certs.go:257] generating profile certs ...
	I1228 07:04:02.009241  882252 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/client.key
	I1228 07:04:02.009336  882252 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/apiserver.key.e6891321
	I1228 07:04:02.009417  882252 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/proxy-client.key
	I1228 07:04:02.009566  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem (1338 bytes)
	W1228 07:04:02.009614  882252 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878_empty.pem, impossibly tiny 0 bytes
	I1228 07:04:02.009629  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 07:04:02.009669  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem (1078 bytes)
	I1228 07:04:02.009721  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:04:02.009751  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem (1675 bytes)
	I1228 07:04:02.009804  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem (1708 bytes)
	I1228 07:04:02.010516  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:04:02.030969  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:04:02.050983  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:04:02.071754  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:04:02.095835  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1228 07:04:02.119207  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 07:04:02.138541  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:04:02.156606  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1228 07:04:02.175252  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem --> /usr/share/ca-certificates/555878.pem (1338 bytes)
	I1228 07:04:02.193782  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem --> /usr/share/ca-certificates/5558782.pem (1708 bytes)
	I1228 07:04:02.213892  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:04:02.232418  882252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:04:02.246379  882252 ssh_runner.go:195] Run: openssl version
	I1228 07:04:02.253696  882252 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:02.261337  882252 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:04:02.268782  882252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:02.272460  882252 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:29 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:02.272502  882252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:02.309757  882252 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:04:02.318273  882252 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/555878.pem
	I1228 07:04:02.327328  882252 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/555878.pem /etc/ssl/certs/555878.pem
	I1228 07:04:02.336776  882252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/555878.pem
	I1228 07:04:02.341018  882252 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/555878.pem
	I1228 07:04:02.341065  882252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/555878.pem
	I1228 07:04:02.375936  882252 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:04:02.383700  882252 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5558782.pem
	I1228 07:04:02.391577  882252 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5558782.pem /etc/ssl/certs/5558782.pem
	I1228 07:04:02.399167  882252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5558782.pem
	I1228 07:04:02.402932  882252 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/5558782.pem
	I1228 07:04:02.403000  882252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5558782.pem
	I1228 07:04:02.441765  882252 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:04:02.450114  882252 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:04:02.454467  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 07:04:02.490848  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 07:04:02.528538  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 07:04:02.574210  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 07:04:02.634863  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 07:04:02.688071  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
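
Each of these openssl invocations is `x509 -checkend 86400`: exit non-zero if the cert expires within a day. The same check in Go, for one of the paths from the log (a sketch meant to run on the node, not minikube's code):

    // Go equivalent of `openssl x509 -checkend 86400` for a node cert.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM data found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 86400s")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least another day")
    }
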
	I1228 07:04:02.737549  882252 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-129908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-129908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:04:02.737745  882252 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1228 07:04:02.763979  882252 cri.go:83] list returned 8 containers
	I1228 07:04:02.764053  882252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:04:02.776746  882252 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 07:04:02.776774  882252 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 07:04:02.776824  882252 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 07:04:02.788129  882252 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 07:04:02.789314  882252 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-129908" does not appear in /home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:04:02.789957  882252 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-552174/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-129908" cluster setting kubeconfig missing "default-k8s-diff-port-129908" context setting]
	I1228 07:04:02.793405  882252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/kubeconfig: {Name:mk8eb66c78260a0013ac235827a08f86055faf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:02.796509  882252 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 07:04:02.810377  882252 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1228 07:04:02.810427  882252 kubeadm.go:602] duration metric: took 33.643458ms to restartPrimaryControlPlane
	I1228 07:04:02.810439  882252 kubeadm.go:403] duration metric: took 72.900379ms to StartCluster
	I1228 07:04:02.810463  882252 settings.go:142] acquiring lock: {Name:mk0a3f928fed4bf13f8897ea15768d1c7b315118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:02.810543  882252 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:04:02.814033  882252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/kubeconfig: {Name:mk8eb66c78260a0013ac235827a08f86055faf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:02.814425  882252 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1228 07:04:02.814668  882252 config.go:182] Loaded profile config "default-k8s-diff-port-129908": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:04:02.814737  882252 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 07:04:02.814823  882252 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-129908"
	I1228 07:04:02.814837  882252 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-129908"
	W1228 07:04:02.814844  882252 addons.go:248] addon storage-provisioner should already be in state true
	I1228 07:04:02.814873  882252 host.go:66] Checking if "default-k8s-diff-port-129908" exists ...
	I1228 07:04:02.815322  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:04:02.815673  882252 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-129908"
	I1228 07:04:02.815706  882252 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-129908"
	I1228 07:04:02.815709  882252 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-129908"
	I1228 07:04:02.815744  882252 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-129908"
	W1228 07:04:02.815758  882252 addons.go:248] addon dashboard should already be in state true
	I1228 07:04:02.815802  882252 host.go:66] Checking if "default-k8s-diff-port-129908" exists ...
	I1228 07:04:02.815974  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:04:02.816174  882252 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-129908"
	I1228 07:04:02.816207  882252 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-129908"
	W1228 07:04:02.816231  882252 addons.go:248] addon metrics-server should already be in state true
	I1228 07:04:02.816264  882252 host.go:66] Checking if "default-k8s-diff-port-129908" exists ...
	I1228 07:04:02.816395  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:04:02.816719  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:04:02.819436  882252 out.go:179] * Verifying Kubernetes components...
	I1228 07:04:02.823186  882252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:04:02.861166  882252 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1228 07:04:02.861203  882252 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 07:04:02.862499  882252 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1228 07:04:02.862519  882252 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1228 07:04:02.862547  882252 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:04:02.862563  882252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 07:04:02.862596  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:02.862620  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:02.863967  882252 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-129908"
	W1228 07:04:02.863988  882252 addons.go:248] addon default-storageclass should already be in state true
	I1228 07:04:02.864027  882252 host.go:66] Checking if "default-k8s-diff-port-129908" exists ...
	I1228 07:04:02.864484  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:04:02.872650  882252 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1228 07:04:02.877275  882252 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 07:03:58.217522  880223 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1228 07:03:58.222483  880223 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1228 07:03:58.223414  880223 api_server.go:141] control plane version: v1.35.0
	I1228 07:03:58.223442  880223 api_server.go:131] duration metric: took 506.434422ms to wait for apiserver health ...
	I1228 07:03:58.223451  880223 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 07:03:58.227321  880223 system_pods.go:59] 9 kube-system pods found
	I1228 07:03:58.227348  880223 system_pods.go:61] "coredns-7d764666f9-s8grm" [2e6d093f-2877-4940-9c33-86d69096da0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:03:58.227355  880223 system_pods.go:61] "etcd-embed-certs-982151" [c669a8a2-61c2-49c4-9fdc-53dc118bdfc4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:03:58.227362  880223 system_pods.go:61] "kindnet-fchxm" [831cdd63-4a6a-4639-823d-4474a33c5a36] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:03:58.227377  880223 system_pods.go:61] "kube-apiserver-embed-certs-982151" [a909bb5d-6c6d-4bc6-af01-10c1dd644033] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:03:58.227387  880223 system_pods.go:61] "kube-controller-manager-embed-certs-982151" [d6b9118f-9c7a-4f5c-ac1c-5dff86296027] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:03:58.227393  880223 system_pods.go:61] "kube-proxy-z29fh" [18a1ba5f-99d6-468e-aa14-5e347d2894a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:03:58.227400  880223 system_pods.go:61] "kube-scheduler-embed-certs-982151" [5a53225f-f1b0-497e-8376-d7c0c3336b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:03:58.227407  880223 system_pods.go:61] "metrics-server-5d785b57d4-xsks7" [81e3f749-eed9-432c-89e9-f4548e1b7e3f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:03:58.227416  880223 system_pods.go:61] "storage-provisioner" [098bf558-141e-4b8d-a1e3-aaae9afedb15] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:03:58.227424  880223 system_pods.go:74] duration metric: took 3.965842ms to wait for pod list to return data ...
	I1228 07:03:58.227433  880223 default_sa.go:34] waiting for default service account to be created ...
	I1228 07:03:58.229720  880223 default_sa.go:45] found service account: "default"
	I1228 07:03:58.229740  880223 default_sa.go:55] duration metric: took 2.300807ms for default service account to be created ...
	I1228 07:03:58.229747  880223 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 07:03:58.299736  880223 system_pods.go:86] 9 kube-system pods found
	I1228 07:03:58.299772  880223 system_pods.go:89] "coredns-7d764666f9-s8grm" [2e6d093f-2877-4940-9c33-86d69096da0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:03:58.299780  880223 system_pods.go:89] "etcd-embed-certs-982151" [c669a8a2-61c2-49c4-9fdc-53dc118bdfc4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:03:58.299787  880223 system_pods.go:89] "kindnet-fchxm" [831cdd63-4a6a-4639-823d-4474a33c5a36] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:03:58.299793  880223 system_pods.go:89] "kube-apiserver-embed-certs-982151" [a909bb5d-6c6d-4bc6-af01-10c1dd644033] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:03:58.299798  880223 system_pods.go:89] "kube-controller-manager-embed-certs-982151" [d6b9118f-9c7a-4f5c-ac1c-5dff86296027] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:03:58.299804  880223 system_pods.go:89] "kube-proxy-z29fh" [18a1ba5f-99d6-468e-aa14-5e347d2894a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:03:58.299809  880223 system_pods.go:89] "kube-scheduler-embed-certs-982151" [5a53225f-f1b0-497e-8376-d7c0c3336b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:03:58.299816  880223 system_pods.go:89] "metrics-server-5d785b57d4-xsks7" [81e3f749-eed9-432c-89e9-f4548e1b7e3f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:03:58.299823  880223 system_pods.go:89] "storage-provisioner" [098bf558-141e-4b8d-a1e3-aaae9afedb15] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:03:58.299833  880223 system_pods.go:126] duration metric: took 70.080198ms to wait for k8s-apps to be running ...
	I1228 07:03:58.299847  880223 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 07:03:58.299903  880223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:03:58.316616  880223 system_svc.go:56] duration metric: took 16.755637ms WaitForService to wait for kubelet
	I1228 07:03:58.316644  880223 kubeadm.go:587] duration metric: took 3.40319134s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:03:58.316662  880223 node_conditions.go:102] verifying NodePressure condition ...
	I1228 07:03:58.319428  880223 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 07:03:58.319454  880223 node_conditions.go:123] node cpu capacity is 8
	I1228 07:03:58.319469  880223 node_conditions.go:105] duration metric: took 2.802451ms to run NodePressure ...
	I1228 07:03:58.319480  880223 start.go:242] waiting for startup goroutines ...
	I1228 07:03:58.319487  880223 start.go:247] waiting for cluster config update ...
	I1228 07:03:58.319498  880223 start.go:256] writing updated cluster config ...
	I1228 07:03:58.319774  880223 ssh_runner.go:195] Run: rm -f paused
	I1228 07:03:58.324556  880223 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:03:58.327768  880223 pod_ready.go:83] waiting for pod "coredns-7d764666f9-s8grm" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 07:04:00.333468  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:02.334146  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:03:58.931292  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:00.931470  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:02.938234  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
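
These pod_ready warnings come from polling each pod's Ready condition until it turns true or the pod goes away. Roughly equivalent logic with client-go (an illustrative sketch, not minikube's pod_ready.go; the pod name is taken from the log and the kubeconfig path is assumed to be the default ~/.kube/config):

    // Sketch of a pod Ready-condition check with client-go.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-kcdsc", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("Ready:", podReady(pod))
    }
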
	I1228 07:04:02.878697  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 07:04:02.878726  882252 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 07:04:02.878799  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:02.894522  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:02.906490  882252 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 07:04:02.906522  882252 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 07:04:02.906593  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:02.906643  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:02.913956  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:02.933719  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:03.007313  882252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:04:03.020758  882252 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1228 07:04:03.020783  882252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1228 07:04:03.025468  882252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:04:03.025821  882252 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-129908" to be "Ready" ...
	I1228 07:04:03.029952  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 07:04:03.029974  882252 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 07:04:03.039959  882252 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1228 07:04:03.039983  882252 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1228 07:04:03.048956  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 07:04:03.048979  882252 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 07:04:03.049792  882252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:04:03.059613  882252 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:04:03.059634  882252 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1228 07:04:03.069470  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 07:04:03.069493  882252 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 07:04:03.079512  882252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:04:03.090099  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 07:04:03.090125  882252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 07:04:03.109191  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 07:04:03.109228  882252 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 07:04:03.127327  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 07:04:03.127354  882252 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 07:04:03.145332  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 07:04:03.145362  882252 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 07:04:03.161020  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 07:04:03.161041  882252 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 07:04:03.175283  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:04:03.175302  882252 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 07:04:03.190565  882252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
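	What the burst above is doing: each addon manifest is copied over SSH into /etc/kubernetes/addons on the node (sources logged as "memory" come from assets embedded in the minikube binary rather than from files on the host), and the manifests are then applied in one batch with the kubectl binary minikube keeps on the node. A rough manual equivalent for a single manifest, assuming the same profile name and on-node paths shown in this log:
	
		minikube -p default-k8s-diff-port-129908 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml
	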
	I1228 07:04:04.366811  882252 node_ready.go:49] node "default-k8s-diff-port-129908" is "Ready"
	I1228 07:04:04.366854  882252 node_ready.go:38] duration metric: took 1.340986184s for node "default-k8s-diff-port-129908" to be "Ready" ...
	I1228 07:04:04.366876  882252 api_server.go:52] waiting for apiserver process to appear ...
	I1228 07:04:04.366953  882252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 07:04:05.079411  882252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.05389931s)
	I1228 07:04:05.079504  882252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.029690998s)
	I1228 07:04:05.079781  882252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.000232104s)
	I1228 07:04:05.079814  882252 addons.go:495] Verifying addon metrics-server=true in "default-k8s-diff-port-129908"
	I1228 07:04:05.079947  882252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.889324162s)
	I1228 07:04:05.080255  882252 api_server.go:72] duration metric: took 2.265784615s to wait for apiserver process to appear ...
	I1228 07:04:05.080277  882252 api_server.go:88] waiting for apiserver healthz status ...
	I1228 07:04:05.080339  882252 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1228 07:04:05.082924  882252 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-129908 addons enable metrics-server
	
	I1228 07:04:05.086253  882252 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 07:04:05.086281  882252 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
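	The 500 above is the apiserver's verbose healthz breakdown: every check passed except two post-start hooks (the [-] lines), which routinely report failed for the first moments after an apiserver restart, which is why the retry half a second later (below) gets a 200. The same breakdown can be requested by hand through kubectl's raw API access; the kubeconfig and binary paths here are the on-node ones from this log and would differ elsewhere:
	
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl get --raw '/healthz?verbose'
		# individual checks are also addressable, e.g.:
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl get --raw '/healthz/poststarthook/rbac/bootstrap-roles'
	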
	I1228 07:04:05.088197  882252 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I1228 07:04:05.089519  882252 addons.go:530] duration metric: took 2.27478482s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I1228 07:04:05.581379  882252 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1228 07:04:05.586973  882252 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1228 07:04:05.588678  882252 api_server.go:141] control plane version: v1.35.0
	I1228 07:04:05.588713  882252 api_server.go:131] duration metric: took 508.427311ms to wait for apiserver health ...
	I1228 07:04:05.588726  882252 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 07:04:05.592639  882252 system_pods.go:59] 9 kube-system pods found
	I1228 07:04:05.592689  882252 system_pods.go:61] "coredns-7d764666f9-mbfzh" [30aa2ad5-7a16-4fce-8347-17630085bc40] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:04:05.592702  882252 system_pods.go:61] "etcd-default-k8s-diff-port-129908" [7c64ebeb-cc67-4b17-a9c2-765ad4d97798] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:04:05.592722  882252 system_pods.go:61] "kindnet-x5db5" [fb316251-22e1-43e0-b728-1f470d02cacb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:04:05.592730  882252 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-129908" [05583a6f-194c-42d1-a532-f3a5765c51db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:04:05.592740  882252 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-129908" [da0ccaf2-767f-4e33-8a8c-d79d87864368] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:04:05.592750  882252 system_pods.go:61] "kube-proxy-rzg9h" [08ba573c-03a0-4a1c-acf1-f34fce8de5d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:04:05.592758  882252 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-129908" [869d96de-6369-43cd-89b8-96bc40f07bff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:04:05.592765  882252 system_pods.go:61] "metrics-server-5d785b57d4-6h7tv" [3f1c607c-1aeb-4fc7-9865-8b161c339288] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:04:05.592772  882252 system_pods.go:61] "storage-provisioner" [f41b8f35-c7cc-4b8e-ae42-81904d11bcd0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:04:05.592784  882252 system_pods.go:74] duration metric: took 4.051269ms to wait for pod list to return data ...
	I1228 07:04:05.592793  882252 default_sa.go:34] waiting for default service account to be created ...
	I1228 07:04:05.595925  882252 default_sa.go:45] found service account: "default"
	I1228 07:04:05.595948  882252 default_sa.go:55] duration metric: took 3.147858ms for default service account to be created ...
	I1228 07:04:05.595959  882252 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 07:04:05.601261  882252 system_pods.go:86] 9 kube-system pods found
	I1228 07:04:05.601357  882252 system_pods.go:89] "coredns-7d764666f9-mbfzh" [30aa2ad5-7a16-4fce-8347-17630085bc40] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:04:05.601384  882252 system_pods.go:89] "etcd-default-k8s-diff-port-129908" [7c64ebeb-cc67-4b17-a9c2-765ad4d97798] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:04:05.601428  882252 system_pods.go:89] "kindnet-x5db5" [fb316251-22e1-43e0-b728-1f470d02cacb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:04:05.601469  882252 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-129908" [05583a6f-194c-42d1-a532-f3a5765c51db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:04:05.601511  882252 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-129908" [da0ccaf2-767f-4e33-8a8c-d79d87864368] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:04:05.601549  882252 system_pods.go:89] "kube-proxy-rzg9h" [08ba573c-03a0-4a1c-acf1-f34fce8de5d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:04:05.601594  882252 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-129908" [869d96de-6369-43cd-89b8-96bc40f07bff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:04:05.601633  882252 system_pods.go:89] "metrics-server-5d785b57d4-6h7tv" [3f1c607c-1aeb-4fc7-9865-8b161c339288] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:04:05.601672  882252 system_pods.go:89] "storage-provisioner" [f41b8f35-c7cc-4b8e-ae42-81904d11bcd0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:04:05.601685  882252 system_pods.go:126] duration metric: took 5.718923ms to wait for k8s-apps to be running ...
	I1228 07:04:05.601696  882252 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 07:04:05.601792  882252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:04:05.633925  882252 system_svc.go:56] duration metric: took 32.2184ms WaitForService to wait for kubelet
	I1228 07:04:05.633962  882252 kubeadm.go:587] duration metric: took 2.819493554s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:04:05.633987  882252 node_conditions.go:102] verifying NodePressure condition ...
	I1228 07:04:05.639517  882252 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 07:04:05.639550  882252 node_conditions.go:123] node cpu capacity is 8
	I1228 07:04:05.639569  882252 node_conditions.go:105] duration metric: took 5.575875ms to run NodePressure ...
	I1228 07:04:05.639586  882252 start.go:242] waiting for startup goroutines ...
	I1228 07:04:05.639597  882252 start.go:247] waiting for cluster config update ...
	I1228 07:04:05.639614  882252 start.go:256] writing updated cluster config ...
	I1228 07:04:05.639915  882252 ssh_runner.go:195] Run: rm -f paused
	I1228 07:04:05.647014  882252 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:04:05.659239  882252 pod_ready.go:83] waiting for pod "coredns-7d764666f9-mbfzh" in "kube-system" namespace to be "Ready" or be gone ...
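	From here on, the W lines interleave output from four test processes running in parallel (PIDs 882252, 873440, 880223, 874073), each polling coredns in its own profile, so consecutive lines can refer to different clusters. The "extra waiting" step polls kube-system pods matching the label selectors listed at 07:04:05.647014; a manual check for one of those selectors, with the profile name taken from this log:
	
		kubectl --context default-k8s-diff-port-129908 -n kube-system get pods -l k8s-app=kube-dns
	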
	W1228 07:04:02.704180  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:04.704962  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:07.203308  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:04.335906  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:06.878878  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:05.434776  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:07.931491  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:07.670978  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:10.165524  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:09.204178  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:11.703509  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:09.333196  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:11.333675  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:10.430946  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:12.431254  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:12.166364  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:14.665182  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:14.203171  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:16.203563  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:13.334174  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:15.833412  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:17.833543  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:14.931067  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:17.431207  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:16.665304  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:19.164406  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:18.204086  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	I1228 07:04:20.703420  873440 pod_ready.go:94] pod "coredns-7d764666f9-9n78x" is "Ready"
	I1228 07:04:20.703450  873440 pod_ready.go:86] duration metric: took 38.005418075s for pod "coredns-7d764666f9-9n78x" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.705734  873440 pod_ready.go:83] waiting for pod "etcd-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.709107  873440 pod_ready.go:94] pod "etcd-no-preload-456925" is "Ready"
	I1228 07:04:20.709130  873440 pod_ready.go:86] duration metric: took 3.373198ms for pod "etcd-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.711055  873440 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.714256  873440 pod_ready.go:94] pod "kube-apiserver-no-preload-456925" is "Ready"
	I1228 07:04:20.714278  873440 pod_ready.go:86] duration metric: took 3.20057ms for pod "kube-apiserver-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.715898  873440 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.901759  873440 pod_ready.go:94] pod "kube-controller-manager-no-preload-456925" is "Ready"
	I1228 07:04:20.901785  873440 pod_ready.go:86] duration metric: took 185.864424ms for pod "kube-controller-manager-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.101880  873440 pod_ready.go:83] waiting for pod "kube-proxy-mn4cz" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.501912  873440 pod_ready.go:94] pod "kube-proxy-mn4cz" is "Ready"
	I1228 07:04:21.501939  873440 pod_ready.go:86] duration metric: took 400.033432ms for pod "kube-proxy-mn4cz" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.701807  873440 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.102084  873440 pod_ready.go:94] pod "kube-scheduler-no-preload-456925" is "Ready"
	I1228 07:04:22.102117  873440 pod_ready.go:86] duration metric: took 400.282919ms for pod "kube-scheduler-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.102133  873440 pod_ready.go:40] duration metric: took 39.409345661s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:04:22.149656  873440 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 07:04:22.151334  873440 out.go:179] * Done! kubectl is now configured to use "no-preload-456925" cluster and "default" namespace by default
	W1228 07:04:20.333502  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:22.336156  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:19.930823  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	I1228 07:04:21.932881  874073 pod_ready.go:94] pod "coredns-5dd5756b68-kcdsc" is "Ready"
	I1228 07:04:21.932918  874073 pod_ready.go:86] duration metric: took 37.007863966s for pod "coredns-5dd5756b68-kcdsc" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.935882  874073 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.939908  874073 pod_ready.go:94] pod "etcd-old-k8s-version-805353" is "Ready"
	I1228 07:04:21.939936  874073 pod_ready.go:86] duration metric: took 4.02365ms for pod "etcd-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.942466  874073 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.946100  874073 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-805353" is "Ready"
	I1228 07:04:21.946121  874073 pod_ready.go:86] duration metric: took 3.628428ms for pod "kube-apiserver-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.948541  874073 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.129399  874073 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-805353" is "Ready"
	I1228 07:04:22.129426  874073 pod_ready.go:86] duration metric: took 180.865961ms for pod "kube-controller-manager-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.330689  874073 pod_ready.go:83] waiting for pod "kube-proxy-sd5kh" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.729871  874073 pod_ready.go:94] pod "kube-proxy-sd5kh" is "Ready"
	I1228 07:04:22.729898  874073 pod_ready.go:86] duration metric: took 399.179627ms for pod "kube-proxy-sd5kh" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.929709  874073 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:23.329353  874073 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-805353" is "Ready"
	I1228 07:04:23.329386  874073 pod_ready.go:86] duration metric: took 399.644333ms for pod "kube-scheduler-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:23.329402  874073 pod_ready.go:40] duration metric: took 38.409544453s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:04:23.376490  874073 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1228 07:04:23.378140  874073 out.go:203] 
	W1228 07:04:23.379347  874073 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1228 07:04:23.380351  874073 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1228 07:04:23.381580  874073 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-805353" cluster and "default" namespace by default
	W1228 07:04:21.665046  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:23.666729  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:24.833321  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:26.834192  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:26.164189  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:28.164450  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:30.164565  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:29.334446  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:31.833411  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:32.664464  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:34.665842  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:33.833511  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:36.335200  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	356a490713e31       6e38f40d628db       12 seconds ago       Running             storage-provisioner       2                   6147e4721220a       storage-provisioner                              kube-system
	be331eff4baba       07655ddf2eebe       37 seconds ago       Running             kubernetes-dashboard      0                   a7c34dad94951       kubernetes-dashboard-8694d4445c-wngnn            kubernetes-dashboard
	a9fa210a0227a       4921d7a6dffa9       54 seconds ago       Running             kindnet-cni               1                   1d8b64e039404       kindnet-qcscm                                    kube-system
	e82a7dad9f2f3       56cc512116c8f       54 seconds ago       Running             busybox                   1                   5b299920b59a8       busybox                                          default
	e7fac735482a5       ead0a4a53df89       54 seconds ago       Running             coredns                   1                   aa61c62e5d74f       coredns-5dd5756b68-kcdsc                         kube-system
	657662d35f27a       6e38f40d628db       54 seconds ago       Exited              storage-provisioner       1                   6147e4721220a       storage-provisioner                              kube-system
	906815baf4f48       ea1030da44aa1       54 seconds ago       Running             kube-proxy                1                   2450a4c1afd0d       kube-proxy-sd5kh                                 kube-system
	f843f61fe24b5       f6f496300a2ae       58 seconds ago       Running             kube-scheduler            1                   ccd6b7e92451f       kube-scheduler-old-k8s-version-805353            kube-system
	3b32102900823       4be79c38a4bab       58 seconds ago       Running             kube-controller-manager   1                   0399d14f4ad03       kube-controller-manager-old-k8s-version-805353   kube-system
	c8f0105c83da5       bb5e0dde9054c       58 seconds ago       Running             kube-apiserver            1                   2485cc50bbba6       kube-apiserver-old-k8s-version-805353            kube-system
	c1b3d72f69249       73deb9a3f7025       58 seconds ago       Running             etcd                      1                   0d199fd2871f2       etcd-old-k8s-version-805353                      kube-system
	19b35739eed7c       56cc512116c8f       About a minute ago   Exited              busybox                   0                   5cd53a4877108       busybox                                          default
	e66e9087b6077       ead0a4a53df89       About a minute ago   Exited              coredns                   0                   45fac7f344922       coredns-5dd5756b68-kcdsc                         kube-system
	f46451de7fd5e       4921d7a6dffa9       About a minute ago   Exited              kindnet-cni               0                   45f82d3229180       kindnet-qcscm                                    kube-system
	fd85b3f52f9a8       ea1030da44aa1       About a minute ago   Exited              kube-proxy                0                   40536d6fc5843       kube-proxy-sd5kh                                 kube-system
	c5aebed7be58d       4be79c38a4bab       2 minutes ago        Exited              kube-controller-manager   0                   164505068476d       kube-controller-manager-old-k8s-version-805353   kube-system
	ad8490b8ae89d       bb5e0dde9054c       2 minutes ago        Exited              kube-apiserver            0                   f52d48b8de9d3       kube-apiserver-old-k8s-version-805353            kube-system
	561f17c3d447d       73deb9a3f7025       2 minutes ago        Exited              etcd                      0                   db550e957d0e1       etcd-old-k8s-version-805353                      kube-system
	cd823b9643e46       f6f496300a2ae       2 minutes ago        Exited              kube-scheduler            0                   f146a7c8125b1       kube-scheduler-old-k8s-version-805353            kube-system
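	This table is the CRI-level container listing for the old-k8s-version-805353 node; the ATTEMPT 1 rows are restarts of the Exited ATTEMPT 0 containers from before the node was restarted. Roughly the same view can be produced on the node with crictl, assuming it is present in the node image:
	
		minikube -p old-k8s-version-805353 ssh -- sudo crictl ps -a
	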
	
	
	==> containerd <==
	Dec 28 07:04:27 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:27.712142159Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.022392655Z" level=info msg="StopPodSandbox for \"a41ff3cccd8312675205cf4d00a4248feccc2ab7ecf3af6e201b7f115d5459a5\""
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.023048564Z" level=info msg="TearDown network for sandbox \"a41ff3cccd8312675205cf4d00a4248feccc2ab7ecf3af6e201b7f115d5459a5\" successfully"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.023107495Z" level=info msg="StopPodSandbox for \"a41ff3cccd8312675205cf4d00a4248feccc2ab7ecf3af6e201b7f115d5459a5\" returns successfully"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.025446268Z" level=info msg="RemovePodSandbox for \"a41ff3cccd8312675205cf4d00a4248feccc2ab7ecf3af6e201b7f115d5459a5\""
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.025495090Z" level=info msg="Forcibly stopping sandbox \"a41ff3cccd8312675205cf4d00a4248feccc2ab7ecf3af6e201b7f115d5459a5\""
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.025991463Z" level=info msg="TearDown network for sandbox \"a41ff3cccd8312675205cf4d00a4248feccc2ab7ecf3af6e201b7f115d5459a5\" successfully"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.028783730Z" level=info msg="Ensure that sandbox a41ff3cccd8312675205cf4d00a4248feccc2ab7ecf3af6e201b7f115d5459a5 in task-service has been cleanup successfully"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.033903017Z" level=info msg="RemovePodSandbox \"a41ff3cccd8312675205cf4d00a4248feccc2ab7ecf3af6e201b7f115d5459a5\" returns successfully"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.034885267Z" level=info msg="StopPodSandbox for \"f7eddc20231c06dbe235523053a65e430d6d2d18706ce9bd09002fb1ddca10a9\""
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.067440067Z" level=info msg="TearDown network for sandbox \"f7eddc20231c06dbe235523053a65e430d6d2d18706ce9bd09002fb1ddca10a9\" successfully"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.068006970Z" level=info msg="StopPodSandbox for \"f7eddc20231c06dbe235523053a65e430d6d2d18706ce9bd09002fb1ddca10a9\" returns successfully"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.068377411Z" level=info msg="RemovePodSandbox for \"f7eddc20231c06dbe235523053a65e430d6d2d18706ce9bd09002fb1ddca10a9\""
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.068414134Z" level=info msg="Forcibly stopping sandbox \"f7eddc20231c06dbe235523053a65e430d6d2d18706ce9bd09002fb1ddca10a9\""
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.090857132Z" level=info msg="TearDown network for sandbox \"f7eddc20231c06dbe235523053a65e430d6d2d18706ce9bd09002fb1ddca10a9\" successfully"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.093336854Z" level=info msg="Ensure that sandbox f7eddc20231c06dbe235523053a65e430d6d2d18706ce9bd09002fb1ddca10a9 in task-service has been cleanup successfully"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.096475429Z" level=info msg="RemovePodSandbox \"f7eddc20231c06dbe235523053a65e430d6d2d18706ce9bd09002fb1ddca10a9\" returns successfully"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.135591826Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Dec 28 07:04:38 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:38.320595015Z" level=info msg="PullImage \"registry.k8s.io/echoserver:1.4\""
	Dec 28 07:04:38 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:38.374750946Z" level=error msg="PullImage \"registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\""
	Dec 28 07:04:38 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:38.374812586Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:04:38 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:38.375690079Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 28 07:04:38 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:38.414950351Z" level=info msg="fetch failed" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Dec 28 07:04:38 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:38.416238367Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:04:38 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:38.416246430Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
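	The two pull failures above have different causes: fake.domain/... is a deliberately unresolvable registry used by the metrics-server test, while registry.k8s.io/echoserver:1.4 is published as a Docker schema 1 manifest (application/vnd.docker.distribution.manifest.v1+prettyjws), a format containerd rejects outright since v2.1, so no registry or DNS fix will make that pull succeed. One way to confirm a tag's manifest format without pulling it, using skopeo as an example tool (not necessarily installed here):
	
		skopeo inspect --raw docker://registry.k8s.io/echoserver:1.4
	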
	
	
	==> describe nodes <==
	Name:               old-k8s-version-805353
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-805353
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=old-k8s-version-805353
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T07_02_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 07:02:37 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-805353
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 07:04:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 07:04:37 +0000   Sun, 28 Dec 2025 07:02:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 07:04:37 +0000   Sun, 28 Dec 2025 07:02:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 07:04:37 +0000   Sun, 28 Dec 2025 07:02:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 28 Dec 2025 07:04:37 +0000   Sun, 28 Dec 2025 07:04:37 +0000   KubeletNotReady              container runtime status check may not have completed yet
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-805353
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                e2225a48-4058-4e27-bcb6-972f2816af01
	  Boot ID:                    b0f6328b-901c-4d58-bf8e-80c711dcb897
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-5dd5756b68-kcdsc                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-old-k8s-version-805353                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-qcscm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-805353             250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-old-k8s-version-805353    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-sd5kh                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-805353             100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 metrics-server-57f55c9bc5-kwn5g                   100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         78s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-5wlfr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-wngnn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 54s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  2m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-805353 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-805353 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x7 over 2m3s)  kubelet          Node old-k8s-version-805353 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m3s                 kubelet          Starting kubelet.
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s                 kubelet          Node old-k8s-version-805353 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s                 kubelet          Node old-k8s-version-805353 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s                 kubelet          Node old-k8s-version-805353 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  118s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           105s                 node-controller  Node old-k8s-version-805353 event: Registered Node old-k8s-version-805353 in Controller
	  Normal  NodeReady                90s                  kubelet          Node old-k8s-version-805353 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node old-k8s-version-805353 status is now: NodeHasSufficientMemory
	  Normal  Starting                 59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     59s (x7 over 59s)    kubelet          Node old-k8s-version-805353 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node old-k8s-version-805353 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           43s                  node-controller  Node old-k8s-version-805353 event: Registered Node old-k8s-version-805353 in Controller
	  Normal  Starting                 1s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  1s                   kubelet          Node old-k8s-version-805353 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    1s                   kubelet          Node old-k8s-version-805353 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     1s                   kubelet          Node old-k8s-version-805353 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             1s                   kubelet          Node old-k8s-version-805353 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  1s                   kubelet          Updated Node Allocatable limit across pods
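	The Events tail explains the Ready=False condition and the not-ready:NoSchedule taint near the top of this dump: the kubelet restarted one second before this post-mortem was captured, so "container runtime status check may not have completed yet" is the expected transient state right after a restart, not a separate failure. The condition can be checked directly once things settle, e.g.:
	
		kubectl --context old-k8s-version-805353 get node old-k8s-version-805353 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'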
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 27 65 9e 6b ac 08 06
	[  +0.000464] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 2e 94 6c 92 e4 08 06
	[Dec28 07:01] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff a6 29 4f 66 86 ca 08 06
	[  +0.004985] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 91 2b a7 d5 cf 08 06
	[ +18.742603] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 b5 93 ca e4 71 08 06
	[  +0.109309] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 c6 37 a9 4f 21 08 06
	[  +8.643062] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 52 6e 36 23 da 08 06
	[ +19.438211] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 70 fa ed e0 93 08 06
	[  +0.000530] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa 52 6e 36 23 da 08 06
	[  +0.857565] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 1d c3 e6 9a 7e 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 29 4f 66 86 ca 08 06
	[Dec28 07:02] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 a0 55 33 45 d1 08 06
	[  +0.000392] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 c6 37 a9 4f 21 08 06
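	The "martian source" messages are the kernel noting packets whose source address looks wrong for the interface they arrived on; on a Docker bridge carrying pod traffic in 10.244.0.0/24 this is routine ARP chatter, not an error. Whether these get logged at all is controlled by a sysctl:
	
		sysctl net.ipv4.conf.all.log_martians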
	
	
	==> kernel <==
	 07:04:38 up  3:47,  0 user,  load average: 2.34, 2.93, 10.73
	Linux old-k8s-version-805353 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.016945    2392 topology_manager.go:215] "Topology Admit Handler" podUID="d6aeb882-f645-4426-a4c9-e532ab8e57e7" podNamespace="kube-system" podName="coredns-5dd5756b68-kcdsc"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.017051    2392 topology_manager.go:215] "Topology Admit Handler" podUID="53e9f353-a057-47be-8ab7-0544989c007f" podNamespace="kube-system" podName="kindnet-qcscm"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.017139    2392 topology_manager.go:215] "Topology Admit Handler" podUID="b63578b0-6b5a-40c8-a669-ee44a8351cc2" podNamespace="kube-system" podName="storage-provisioner"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.017236    2392 topology_manager.go:215] "Topology Admit Handler" podUID="d9d29e50-d42e-4587-a782-865a82530db0" podNamespace="default" podName="busybox"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.017322    2392 topology_manager.go:215] "Topology Admit Handler" podUID="61fd49e7-f7e7-4a20-9615-057de964bc02" podNamespace="kube-system" podName="metrics-server-57f55c9bc5-kwn5g"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.017399    2392 topology_manager.go:215] "Topology Admit Handler" podUID="d601191e-5f70-4d10-9a9b-58c41c86d1d4" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-wngnn"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.017469    2392 topology_manager.go:215] "Topology Admit Handler" podUID="2c6d7704-ddd1-42a1-95f5-23d0199d28e3" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-5wlfr"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.023297    2392 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.044955    2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/53e9f353-a057-47be-8ab7-0544989c007f-cni-cfg\") pod \"kindnet-qcscm\" (UID: \"53e9f353-a057-47be-8ab7-0544989c007f\") " pod="kube-system/kindnet-qcscm"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.045078    2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a3c98b1-07c1-4039-ab3b-16af3801cac8-lib-modules\") pod \"kube-proxy-sd5kh\" (UID: \"7a3c98b1-07c1-4039-ab3b-16af3801cac8\") " pod="kube-system/kube-proxy-sd5kh"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.045127    2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53e9f353-a057-47be-8ab7-0544989c007f-xtables-lock\") pod \"kindnet-qcscm\" (UID: \"53e9f353-a057-47be-8ab7-0544989c007f\") " pod="kube-system/kindnet-qcscm"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.045158    2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53e9f353-a057-47be-8ab7-0544989c007f-lib-modules\") pod \"kindnet-qcscm\" (UID: \"53e9f353-a057-47be-8ab7-0544989c007f\") " pod="kube-system/kindnet-qcscm"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.045726    2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b63578b0-6b5a-40c8-a669-ee44a8351cc2-tmp\") pod \"storage-provisioner\" (UID: \"b63578b0-6b5a-40c8-a669-ee44a8351cc2\") " pod="kube-system/storage-provisioner"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.045911    2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a3c98b1-07c1-4039-ab3b-16af3801cac8-xtables-lock\") pod \"kube-proxy-sd5kh\" (UID: \"7a3c98b1-07c1-4039-ab3b-16af3801cac8\") " pod="kube-system/kube-proxy-sd5kh"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.124772    2392 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-old-k8s-version-805353\" already exists" pod="kube-system/kube-controller-manager-old-k8s-version-805353"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.125266    2392 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-old-k8s-version-805353\" already exists" pod="kube-system/kube-apiserver-old-k8s-version-805353"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.125544    2392 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"etcd-old-k8s-version-805353\" already exists" pod="kube-system/etcd-old-k8s-version-805353"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.375145    2392 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.375208    2392 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.375537    2392 kuberuntime_manager.go:1209] container &Container{Name:dashboard-metrics-scraper,Image:registry.k8s.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8q24p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dashboard-metrics-scraper-5f989dc9cf-5wlfr_kubernetes-dashboard(2c6d7704-ddd1-42a1-95f5-23d0199d28e3): ErrImagePull: rpc error: code = Unimplemented desc = failed to pull and unpack image "registry.k8s.io/echoserver:1.4": not implemented: media type "application/vnd.docker.distribution.manifest.v1+prettyjws" is no longer supported since containerd v2.1, please rebuild the image as "application/vnd.docker.distribution.manifest.v2+json" or "application/vnd.oci.image.manifest.v1+json"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.375654    2392 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"rpc error: code = Unimplemented desc = failed to pull and unpack image \\\"registry.k8s.io/echoserver:1.4\\\": not implemented: media type \\\"application/vnd.docker.distribution.manifest.v1+prettyjws\\\" is no longer supported since containerd v2.1, please rebuild the image as \\\"application/vnd.docker.distribution.manifest.v2+json\\\" or \\\"application/vnd.oci.image.manifest.v1+json\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5wlfr" podUID="2c6d7704-ddd1-42a1-95f5-23d0199d28e3"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.416640    2392 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.416702    2392 kuberuntime_image.go:53] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.416872    2392 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7xn9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-kwn5g_kube-system(61fd49e7-f7e7-4a20-9615-057de964bc02): ErrImagePull: failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.416969    2392 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-kwn5g" podUID="61fd49e7-f7e7-4a20-9615-057de964bc02"
	

                                                
                                                
-- /stdout --
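Editor's note: the two ErrImagePull families in the kubelet log above have different causes. The fake.domain failure is expected (the test deliberately points the metrics-server addon at a nonexistent registry), but the schema-1 rejection for registry.k8s.io/echoserver:1.4 is real: containerd v2.1 no longer accepts "application/vnd.docker.distribution.manifest.v1+prettyjws" manifests. A minimal Go sketch for checking which manifest media type a registry serves, assuming anonymous HEAD requests against this public image are permitted (some registries require a token even for public content; nothing below is minikube API):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// HEAD the OCI distribution manifest endpoint for registry.k8s.io/echoserver:1.4.
		req, err := http.NewRequest(http.MethodHead,
			"https://registry.k8s.io/v2/echoserver/manifests/1.4", nil)
		if err != nil {
			panic(err)
		}
		// Advertise the media types containerd v2.1 still accepts; a schema-1-only
		// image is expected to come back as
		// application/vnd.docker.distribution.manifest.v1+prettyjws regardless.
		req.Header.Set("Accept",
			"application/vnd.oci.image.manifest.v1+json, "+
				"application/vnd.docker.distribution.manifest.v2+json")
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status, resp.Header.Get("Content-Type"))
	}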
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-805353 -n old-k8s-version-805353
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-805353 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-57f55c9bc5-kwn5g dashboard-metrics-scraper-5f989dc9cf-5wlfr
helpers_test.go:283: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context old-k8s-version-805353 describe pod metrics-server-57f55c9bc5-kwn5g dashboard-metrics-scraper-5f989dc9cf-5wlfr
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context old-k8s-version-805353 describe pod metrics-server-57f55c9bc5-kwn5g dashboard-metrics-scraper-5f989dc9cf-5wlfr: exit status 1 (70.337678ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-kwn5g" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-5f989dc9cf-5wlfr" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context old-k8s-version-805353 describe pod metrics-server-57f55c9bc5-kwn5g dashboard-metrics-scraper-5f989dc9cf-5wlfr: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-805353
helpers_test.go:244: (dbg) docker inspect old-k8s-version-805353:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c25d9621e8a13ec4500bf128d4892c0aa1e846dc13ee0e08c11402124111b6d9",
	        "Created": "2025-12-28T07:02:25.634018546Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 874370,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:03:33.52297728Z",
	            "FinishedAt": "2025-12-28T07:03:32.540399369Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/c25d9621e8a13ec4500bf128d4892c0aa1e846dc13ee0e08c11402124111b6d9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c25d9621e8a13ec4500bf128d4892c0aa1e846dc13ee0e08c11402124111b6d9/hostname",
	        "HostsPath": "/var/lib/docker/containers/c25d9621e8a13ec4500bf128d4892c0aa1e846dc13ee0e08c11402124111b6d9/hosts",
	        "LogPath": "/var/lib/docker/containers/c25d9621e8a13ec4500bf128d4892c0aa1e846dc13ee0e08c11402124111b6d9/c25d9621e8a13ec4500bf128d4892c0aa1e846dc13ee0e08c11402124111b6d9-json.log",
	        "Name": "/old-k8s-version-805353",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-805353:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-805353",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c25d9621e8a13ec4500bf128d4892c0aa1e846dc13ee0e08c11402124111b6d9",
	                "LowerDir": "/var/lib/docker/overlay2/348e30cc16783e6a24603dac02d7a170a1720e1aa6d57621b0b3d7121b58c167-init/diff:/var/lib/docker/overlay2/dfc7a4c580b7be84b0f83410b7478c6f6aa3f00b996556623ab9129bd6527422/diff",
	                "MergedDir": "/var/lib/docker/overlay2/348e30cc16783e6a24603dac02d7a170a1720e1aa6d57621b0b3d7121b58c167/merged",
	                "UpperDir": "/var/lib/docker/overlay2/348e30cc16783e6a24603dac02d7a170a1720e1aa6d57621b0b3d7121b58c167/diff",
	                "WorkDir": "/var/lib/docker/overlay2/348e30cc16783e6a24603dac02d7a170a1720e1aa6d57621b0b3d7121b58c167/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-805353",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-805353/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-805353",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-805353",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-805353",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c8cbac74bc625ae94cf15632d144f827e91ea376b8aca964d3a55637e6c3c255",
	            "SandboxKey": "/var/run/docker/netns/c8cbac74bc62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-805353": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e34afc1724f9ec05151f68e362a6a5ad479b128bd03dbc2c9ee16903f92b971d",
	                    "EndpointID": "9df85c6d98082c54bf3a28ae3f5e1a8b968d7df9a34d9c06fec542e99a2333f3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "52:75:cf:6f:aa:ba",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-805353",
	                        "c25d9621e8a1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
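Editor's note: the inspect dump above is the same data the helpers later read with Go templates (the "22/tcp" HostPort lookups in the Last Start log). An equivalent hedged sketch in plain Go, decoding only the fields this report actually uses; the container name is taken from this run:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Trimmed view of `docker inspect` output: just the published ports.
	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "old-k8s-version-805353").Output()
		if err != nil {
			panic(err)
		}
		var cs []container // docker inspect always returns a JSON array
		if err := json.Unmarshal(out, &cs); err != nil || len(cs) == 0 {
			panic(fmt.Sprintf("decode failed: %v", err))
		}
		for port, bindings := range cs[0].NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}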
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-805353 -n old-k8s-version-805353
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-805353 logs -n 25
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-284795                                                                                                                                                                                                                     │ disable-driver-mounts-284795 │ jenkins │ v1.37.0 │ 28 Dec 25 07:02 UTC │ 28 Dec 25 07:02 UTC │
	│ start   │ -p default-k8s-diff-port-129908 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-129908 │ jenkins │ v1.37.0 │ 28 Dec 25 07:02 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable metrics-server -p no-preload-456925 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ stop    │ -p no-preload-456925 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-805353 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ stop    │ -p old-k8s-version-805353 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable dashboard -p no-preload-456925 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p no-preload-456925 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                       │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-805353 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p old-k8s-version-805353 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-982151 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-982151           │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ stop    │ -p embed-certs-982151 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-982151           │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-129908 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-129908 │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ stop    │ -p default-k8s-diff-port-129908 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-129908 │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-982151 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-982151           │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p embed-certs-982151 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                        │ embed-certs-982151           │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-129908 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-129908 │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p default-k8s-diff-port-129908 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-129908 │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │                     │
	│ image   │ no-preload-456925 image list --format=json                                                                                                                                                                                                          │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p no-preload-456925 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ image   │ old-k8s-version-805353 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ unpause │ -p no-preload-456925 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p old-k8s-version-805353 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ unpause │ -p old-k8s-version-805353 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-805353       │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ delete  │ -p no-preload-456925                                                                                                                                                                                                                                │ no-preload-456925            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:03:55
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
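Editor's note: the header above documents the klog line format used for the remainder of this section. A small sketch that splits one such line into its parts; the regular expression is an assumption derived from that format string, not klog's own parser:

	package main

	import (
		"fmt"
		"regexp"
	)

	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) (\S+:\d+)\] (.*)$`)

	func main() {
		line := "I1228 07:03:55.738532  882252 out.go:360] Setting OutFile to fd 1 ..."
		m := klogRe.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("no match")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}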
	I1228 07:03:55.738532  882252 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:03:55.738689  882252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:03:55.738703  882252 out.go:374] Setting ErrFile to fd 2...
	I1228 07:03:55.738721  882252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:03:55.739259  882252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 07:03:55.740037  882252 out.go:368] Setting JSON to false
	I1228 07:03:55.742377  882252 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13580,"bootTime":1766891856,"procs":388,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 07:03:55.742479  882252 start.go:143] virtualization: kvm guest
	I1228 07:03:55.744573  882252 out.go:179] * [default-k8s-diff-port-129908] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 07:03:55.745969  882252 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:03:55.746036  882252 notify.go:221] Checking for updates...
	I1228 07:03:55.749018  882252 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:03:55.750368  882252 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:03:55.751423  882252 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	I1228 07:03:55.752505  882252 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 07:03:55.753752  882252 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:03:55.755380  882252 config.go:182] Loaded profile config "default-k8s-diff-port-129908": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:03:55.756046  882252 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:03:55.782846  882252 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 07:03:55.782996  882252 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:03:55.852117  882252 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-28 07:03:55.840745048 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 07:03:55.852300  882252 docker.go:319] overlay module found
	I1228 07:03:55.853848  882252 out.go:179] * Using the docker driver based on existing profile
	I1228 07:03:55.855042  882252 start.go:309] selected driver: docker
	I1228 07:03:55.855063  882252 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-129908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-129908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:03:55.855165  882252 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:03:55.855840  882252 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:03:55.917473  882252 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-28 07:03:55.906550203 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 07:03:55.917793  882252 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:03:55.917933  882252 cni.go:84] Creating CNI manager for ""
	I1228 07:03:55.918027  882252 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:03:55.918113  882252 start.go:353] cluster config:
	{Name:default-k8s-diff-port-129908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-129908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
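Editor's note: the cluster config above is the in-memory form of the profile JSON that the log later saves (profile.go:143) to .../profiles/default-k8s-diff-port-129908/config.json. A hedged sketch for reading a few fields back out of that file; the struct covers only fields visible in this dump and is an assumption, not minikube's full schema:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// Partial profile config; field names inferred from the dump above.
	type profileConfig struct {
		Name             string
		Driver           string
		Memory           int
		KubernetesConfig struct {
			KubernetesVersion string
			ContainerRuntime  string
		}
	}

	func main() {
		// Path copied from this run's log; adjust for your environment.
		data, err := os.ReadFile("/home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/config.json")
		if err != nil {
			panic(err)
		}
		var cfg profileConfig
		if err := json.Unmarshal(data, &cfg); err != nil {
			panic(err)
		}
		fmt.Printf("%s: %s / %s, %d MiB\n", cfg.Name,
			cfg.KubernetesConfig.KubernetesVersion,
			cfg.KubernetesConfig.ContainerRuntime, cfg.Memory)
	}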
	I1228 07:03:55.919833  882252 out.go:179] * Starting "default-k8s-diff-port-129908" primary control-plane node in "default-k8s-diff-port-129908" cluster
	I1228 07:03:55.920969  882252 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:03:55.922122  882252 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:03:55.923232  882252 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:03:55.923274  882252 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-552174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-amd64.tar.lz4
	I1228 07:03:55.923284  882252 cache.go:65] Caching tarball of preloaded images
	I1228 07:03:55.923341  882252 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:03:55.923383  882252 preload.go:251] Found /home/jenkins/minikube-integration/22352-552174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1228 07:03:55.923396  882252 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1228 07:03:55.923509  882252 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/config.json ...
	I1228 07:03:55.945420  882252 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:03:55.945450  882252 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:03:55.945480  882252 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:03:55.945524  882252 start.go:360] acquireMachinesLock for default-k8s-diff-port-129908: {Name:mk66a28d31a5a7f03f0abd1dfec44af622c036e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:03:55.945595  882252 start.go:364] duration metric: took 45.236µs to acquireMachinesLock for "default-k8s-diff-port-129908"
	I1228 07:03:55.945619  882252 start.go:96] Skipping create...Using existing machine configuration
	I1228 07:03:55.945629  882252 fix.go:54] fixHost starting: 
	I1228 07:03:55.945869  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:03:55.966941  882252 fix.go:112] recreateIfNeeded on default-k8s-diff-port-129908: state=Stopped err=<nil>
	W1228 07:03:55.966987  882252 fix.go:138] unexpected machine state, will restart: <nil>
	W1228 07:03:53.203811  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:03:55.206598  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	I1228 07:03:54.958080  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 07:03:54.958206  880223 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 07:03:54.958289  880223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-982151
	I1228 07:03:54.961176  880223 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:03:54.961312  880223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 07:03:54.961373  880223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-982151
	I1228 07:03:54.988639  880223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/embed-certs-982151/id_rsa Username:docker}
	I1228 07:03:54.999061  880223 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 07:03:54.999103  880223 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 07:03:54.999305  880223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-982151
	I1228 07:03:55.003431  880223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/embed-certs-982151/id_rsa Username:docker}
	I1228 07:03:55.007935  880223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/embed-certs-982151/id_rsa Username:docker}
	I1228 07:03:55.029653  880223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/embed-certs-982151/id_rsa Username:docker}
	I1228 07:03:55.081852  880223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:03:55.098405  880223 node_ready.go:35] waiting up to 6m0s for node "embed-certs-982151" to be "Ready" ...
	I1228 07:03:55.112422  880223 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1228 07:03:55.112450  880223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1228 07:03:55.121850  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:03:55.121890  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 07:03:55.121900  880223 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 07:03:55.136569  880223 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1228 07:03:55.136606  880223 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1228 07:03:55.143729  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:03:55.147425  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 07:03:55.147451  880223 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 07:03:55.172581  880223 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:03:55.172615  880223 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1228 07:03:55.176850  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 07:03:55.176888  880223 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 07:03:55.210186  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 07:03:55.210579  880223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 07:03:55.215104  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:03:55.244553  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 07:03:55.244590  880223 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1228 07:03:55.261806  880223 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1228 07:03:55.261878  880223 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1228 07:03:55.261947  880223 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
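Editor's note: the two apply failures above are expected during a restart: the apiserver on [::1]:8443 is not yet accepting connections, so addons.go queues a retry ("will retry after 300ms" in the log, and both manifests are re-applied successfully below). A minimal sketch of that retry-with-backoff pattern; the command, timings, and doubling policy here are illustrative, not minikube's exact implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply` until it succeeds or the
	// deadline passes, doubling the wait between attempts.
	func applyWithRetry(deadline time.Duration, manifest string) error {
		backoff := 300 * time.Millisecond
		stop := time.Now().Add(deadline)
		for {
			out, err := exec.Command("kubectl", "apply", "-f", manifest).CombinedOutput()
			if err == nil {
				return nil
			}
			if time.Now().After(stop) {
				return fmt.Errorf("giving up on %s: %v\n%s", manifest, err, out)
			}
			time.Sleep(backoff)
			backoff *= 2
		}
	}

	func main() {
		if err := applyWithRetry(30*time.Second, "/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
			fmt.Println(err)
		}
	}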
	I1228 07:03:55.294458  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 07:03:55.294486  880223 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 07:03:55.326344  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 07:03:55.326372  880223 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 07:03:55.346621  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 07:03:55.346647  880223 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 07:03:55.375997  880223 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:03:55.376020  880223 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 07:03:55.391494  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:03:55.441974  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:03:55.603856  880223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:03:57.038487  880223 node_ready.go:49] node "embed-certs-982151" is "Ready"
	I1228 07:03:57.038523  880223 node_ready.go:38] duration metric: took 1.940079394s for node "embed-certs-982151" to be "Ready" ...
	I1228 07:03:57.038543  880223 api_server.go:52] waiting for apiserver process to appear ...
	I1228 07:03:57.038606  880223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 07:03:57.704736  880223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.489588386s)
	I1228 07:03:57.704780  880223 addons.go:495] Verifying addon metrics-server=true in "embed-certs-982151"
	I1228 07:03:57.704836  880223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.313298327s)
	I1228 07:03:57.705109  880223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.263104063s)
	I1228 07:03:57.707172  880223 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-982151 addons enable metrics-server
	
	I1228 07:03:57.716966  880223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.11307566s)
	I1228 07:03:57.716984  880223 api_server.go:72] duration metric: took 2.803527403s to wait for apiserver process to appear ...
	I1228 07:03:57.717001  880223 api_server.go:88] waiting for apiserver healthz status ...
	I1228 07:03:57.717019  880223 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1228 07:03:57.719801  880223 out.go:179] * Enabled addons: metrics-server, dashboard, storage-provisioner, default-storageclass
	I1228 07:03:57.721544  880223 addons.go:530] duration metric: took 2.808027569s for enable addons: enabled=[metrics-server dashboard storage-provisioner default-storageclass]
	I1228 07:03:57.721710  880223 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 07:03:57.721773  880223 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
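	
	The 500 above is the transient form of apiserver readiness: two post-start hooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) have not finished, so minikube simply re-polls /healthz until it returns 200, as it does at 07:03:58 further down. A minimal sketch of that polling pattern in Go, assuming only the endpoint URL from the log; the insecure TLS config is illustrative and this is not minikube's actual implementation:
	
	// Poll an apiserver /healthz endpoint until it returns 200 or a deadline passes.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver cert is self-signed in this setup, so the sketch skips
			// verification; real callers should pin the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				// A 500 here lists the post-start hooks that have not finished yet,
				// exactly as in the log above.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}
	
	func main() {
		if err := waitHealthy("https://192.168.94.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	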
	W1228 07:03:53.930573  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:03:56.432760  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	I1228 07:03:55.969541  882252 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-129908" ...
	I1228 07:03:55.969643  882252 cli_runner.go:164] Run: docker start default-k8s-diff-port-129908
	I1228 07:03:56.285009  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:03:56.317632  882252 kic.go:430] container "default-k8s-diff-port-129908" state is running.
	I1228 07:03:56.318755  882252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-129908
	I1228 07:03:56.344384  882252 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/config.json ...
	I1228 07:03:56.344656  882252 machine.go:94] provisionDockerMachine start ...
	I1228 07:03:56.344759  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:03:56.367745  882252 main.go:144] libmachine: Using SSH client type: native
	I1228 07:03:56.368021  882252 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1228 07:03:56.368034  882252 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:03:56.368796  882252 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41308->127.0.0.1:33133: read: connection reset by peer
	I1228 07:03:59.512247  882252 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-129908
	
	I1228 07:03:59.512276  882252 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-129908"
	I1228 07:03:59.512350  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:03:59.534401  882252 main.go:144] libmachine: Using SSH client type: native
	I1228 07:03:59.534744  882252 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1228 07:03:59.534766  882252 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-129908 && echo "default-k8s-diff-port-129908" | sudo tee /etc/hostname
	I1228 07:03:59.684180  882252 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-129908
	
	I1228 07:03:59.684288  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:03:59.705307  882252 main.go:144] libmachine: Using SSH client type: native
	I1228 07:03:59.705585  882252 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1228 07:03:59.705613  882252 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-129908' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-129908/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-129908' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:03:59.844260  882252 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 07:03:59.844297  882252 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-552174/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-552174/.minikube}
	I1228 07:03:59.844323  882252 ubuntu.go:190] setting up certificates
	I1228 07:03:59.844346  882252 provision.go:84] configureAuth start
	I1228 07:03:59.844416  882252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-129908
	I1228 07:03:59.865173  882252 provision.go:143] copyHostCerts
	I1228 07:03:59.865247  882252 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem, removing ...
	I1228 07:03:59.865261  882252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem
	I1228 07:03:59.865342  882252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem (1078 bytes)
	I1228 07:03:59.865484  882252 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem, removing ...
	I1228 07:03:59.865498  882252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem
	I1228 07:03:59.865539  882252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem (1123 bytes)
	I1228 07:03:59.865612  882252 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem, removing ...
	I1228 07:03:59.865623  882252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem
	I1228 07:03:59.865658  882252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem (1675 bytes)
	I1228 07:03:59.865731  882252 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-129908 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-129908 localhost minikube]
	I1228 07:03:59.897890  882252 provision.go:177] copyRemoteCerts
	I1228 07:03:59.897972  882252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:03:59.898024  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:03:59.918735  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:00.019603  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1228 07:04:00.042819  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1228 07:04:00.064302  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 07:04:00.085628  882252 provision.go:87] duration metric: took 241.249279ms to configureAuth
	I1228 07:04:00.085661  882252 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:04:00.085909  882252 config.go:182] Loaded profile config "default-k8s-diff-port-129908": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:04:00.085931  882252 machine.go:97] duration metric: took 3.741255863s to provisionDockerMachine
	I1228 07:04:00.085943  882252 start.go:293] postStartSetup for "default-k8s-diff-port-129908" (driver="docker")
	I1228 07:04:00.085955  882252 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:04:00.086021  882252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:04:00.086092  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:00.107294  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:00.213532  882252 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:04:00.217985  882252 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:04:00.218016  882252 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:04:00.218030  882252 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-552174/.minikube/addons for local assets ...
	I1228 07:04:00.218175  882252 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-552174/.minikube/files for local assets ...
	I1228 07:04:00.218332  882252 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem -> 5558782.pem in /etc/ssl/certs
	I1228 07:04:00.218449  882252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:04:00.227795  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem --> /etc/ssl/certs/5558782.pem (1708 bytes)
	I1228 07:04:00.251291  882252 start.go:296] duration metric: took 165.331058ms for postStartSetup
	I1228 07:04:00.251386  882252 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:04:00.251464  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:00.276621  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:00.373799  882252 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:04:00.379539  882252 fix.go:56] duration metric: took 4.433903055s for fixHost
	I1228 07:04:00.379578  882252 start.go:83] releasing machines lock for "default-k8s-diff-port-129908", held for 4.433956892s
	I1228 07:04:00.379650  882252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-129908
	I1228 07:04:00.401020  882252 ssh_runner.go:195] Run: cat /version.json
	I1228 07:04:00.401076  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:00.401098  882252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 07:04:00.401197  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:00.423791  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:00.424146  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:00.518927  882252 ssh_runner.go:195] Run: systemctl --version
	I1228 07:04:00.588110  882252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:04:00.594268  882252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:04:00.594347  882252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:04:00.604690  882252 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 07:04:00.604711  882252 start.go:496] detecting cgroup driver to use...
	I1228 07:04:00.604747  882252 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 07:04:00.604794  882252 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1228 07:04:00.626916  882252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:04:00.642927  882252 docker.go:218] disabling cri-docker service (if available) ...
	I1228 07:04:00.643006  882252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 07:04:00.661151  882252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 07:04:00.677071  882252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 07:04:00.782881  882252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 07:04:00.886938  882252 docker.go:234] disabling docker service ...
	I1228 07:04:00.887034  882252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 07:04:00.905648  882252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 07:04:00.922255  882252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 07:04:01.032767  882252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 07:04:01.151567  882252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 07:04:01.167546  882252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:04:01.184683  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1228 07:04:01.195723  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:04:01.206605  882252 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1228 07:04:01.206685  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1228 07:04:01.217347  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:04:01.227406  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:04:01.238026  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:04:01.252602  882252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:04:01.262955  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:04:01.274767  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:04:01.285746  882252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
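	
	The run of sed commands above rewrites /etc/containerd/config.toml in place before containerd is restarted at 07:04:01.409989. A minimal Go sketch of one such rewrite, flipping SystemdCgroup to true while preserving indentation; the path and file mode are illustrative assumptions, not minikube's actual code:
	
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
	package main
	
	import (
		"os"
		"regexp"
	)
	
	func main() {
		const path = "/etc/containerd/config.toml" // illustrative path
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// (?m) makes ^/$ match per line; ${1} re-emits the captured indentation.
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			panic(err)
		}
	}
	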
	I1228 07:04:01.296288  882252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:04:01.305400  882252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:04:01.314805  882252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:04:01.409989  882252 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1228 07:04:01.571109  882252 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1228 07:04:01.571191  882252 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1228 07:04:01.575916  882252 start.go:574] Will wait 60s for crictl version
	I1228 07:04:01.575986  882252 ssh_runner.go:195] Run: which crictl
	I1228 07:04:01.580123  882252 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:04:01.609855  882252 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1228 07:04:01.609941  882252 ssh_runner.go:195] Run: containerd --version
	I1228 07:04:01.636563  882252 ssh_runner.go:195] Run: containerd --version
	I1228 07:04:01.664300  882252 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	W1228 07:03:57.704861  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:00.204893  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	I1228 07:04:01.666266  882252 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-129908 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:04:01.685605  882252 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1228 07:04:01.690424  882252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:04:01.702737  882252 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-129908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-129908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:04:01.702926  882252 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:04:01.702997  882252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:04:01.733786  882252 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:04:01.733821  882252 containerd.go:542] Images already preloaded, skipping extraction
	I1228 07:04:01.733892  882252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:04:01.763427  882252 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:04:01.763453  882252 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:04:01.763463  882252 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 containerd true true} ...
	I1228 07:04:01.763630  882252 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-129908 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-129908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 07:04:01.763699  882252 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1228 07:04:01.793899  882252 cni.go:84] Creating CNI manager for ""
	I1228 07:04:01.793927  882252 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:04:01.793950  882252 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 07:04:01.793979  882252 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-129908 NodeName:default-k8s-diff-port-129908 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:04:01.794135  882252 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-129908"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
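	
	The generated KubeletConfiguration above deliberately disables disk-pressure eviction (all evictionHard thresholds at 0%, imageGCHighThresholdPercent at 100), which suits short-lived CI nodes. A small sketch of reading those fields back out of such a document with gopkg.in/yaml.v3; the struct is a hand-rolled subset for illustration, not the real k8s.io/kubelet API types:
	
	package main
	
	import (
		"fmt"
	
		"gopkg.in/yaml.v3"
	)
	
	// kubeletConfig mirrors only the fields this sketch cares about.
	type kubeletConfig struct {
		Kind                        string            `yaml:"kind"`
		CgroupDriver                string            `yaml:"cgroupDriver"`
		ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
		EvictionHard                map[string]string `yaml:"evictionHard"`
		FailSwapOn                  bool              `yaml:"failSwapOn"`
	}
	
	const doc = `
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: systemd
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	failSwapOn: false
	`
	
	func main() {
		var cfg kubeletConfig
		if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
			panic(err)
		}
		fmt.Printf("%s: cgroupDriver=%s evictionHard=%v\n", cfg.Kind, cfg.CgroupDriver, cfg.EvictionHard)
	}
	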
	
	I1228 07:04:01.794234  882252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:04:01.803826  882252 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:04:01.803923  882252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:04:01.814997  882252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1228 07:04:01.830520  882252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:04:01.847094  882252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
	I1228 07:04:01.862575  882252 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:04:01.867240  882252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:04:01.879762  882252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:04:01.981753  882252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:04:02.008768  882252 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908 for IP: 192.168.85.2
	I1228 07:04:02.008791  882252 certs.go:195] generating shared ca certs ...
	I1228 07:04:02.008811  882252 certs.go:227] acquiring lock for ca certs: {Name:mkf3a34076ce55c96c0ca7e803bd863f5c48e3ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:02.008980  882252 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key
	I1228 07:04:02.009054  882252 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key
	I1228 07:04:02.009079  882252 certs.go:257] generating profile certs ...
	I1228 07:04:02.009241  882252 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/client.key
	I1228 07:04:02.009336  882252 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/apiserver.key.e6891321
	I1228 07:04:02.009417  882252 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/proxy-client.key
	I1228 07:04:02.009566  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem (1338 bytes)
	W1228 07:04:02.009614  882252 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878_empty.pem, impossibly tiny 0 bytes
	I1228 07:04:02.009629  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 07:04:02.009669  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem (1078 bytes)
	I1228 07:04:02.009721  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:04:02.009751  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem (1675 bytes)
	I1228 07:04:02.009804  882252 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem (1708 bytes)
	I1228 07:04:02.010516  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:04:02.030969  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:04:02.050983  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:04:02.071754  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:04:02.095835  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1228 07:04:02.119207  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 07:04:02.138541  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:04:02.156606  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/default-k8s-diff-port-129908/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1228 07:04:02.175252  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem --> /usr/share/ca-certificates/555878.pem (1338 bytes)
	I1228 07:04:02.193782  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem --> /usr/share/ca-certificates/5558782.pem (1708 bytes)
	I1228 07:04:02.213892  882252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:04:02.232418  882252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:04:02.246379  882252 ssh_runner.go:195] Run: openssl version
	I1228 07:04:02.253696  882252 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:02.261337  882252 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:04:02.268782  882252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:02.272460  882252 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:29 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:02.272502  882252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:02.309757  882252 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:04:02.318273  882252 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/555878.pem
	I1228 07:04:02.327328  882252 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/555878.pem /etc/ssl/certs/555878.pem
	I1228 07:04:02.336776  882252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/555878.pem
	I1228 07:04:02.341018  882252 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/555878.pem
	I1228 07:04:02.341065  882252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/555878.pem
	I1228 07:04:02.375936  882252 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:04:02.383700  882252 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5558782.pem
	I1228 07:04:02.391577  882252 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5558782.pem /etc/ssl/certs/5558782.pem
	I1228 07:04:02.399167  882252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5558782.pem
	I1228 07:04:02.402932  882252 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/5558782.pem
	I1228 07:04:02.403000  882252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5558782.pem
	I1228 07:04:02.441765  882252 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
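	
	The three ls / openssl x509 -hash / test -L sequences above install each CA into the OpenSSL trust directory, where certificates are looked up via a <subject-hash>.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). A sketch of the same pattern in Go, shelling out to the identical openssl invocation shown in the log; the paths are illustrative and this is not minikube's code:
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// installCA symlinks a PEM file as <subject-hash>.0 in /etc/ssl/certs,
	// the layout OpenSSL uses to locate trusted CAs. Run as root.
	func installCA(pem string) error {
		// Same invocation as in the log: openssl x509 -hash -noout -in <pem>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pem, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// ln -fs equivalent: replace any existing link.
		os.Remove(link)
		return os.Symlink(pem, link)
	}
	
	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
	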
	I1228 07:04:02.450114  882252 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:04:02.454467  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 07:04:02.490848  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 07:04:02.528538  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 07:04:02.574210  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 07:04:02.634863  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 07:04:02.688071  882252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1228 07:04:02.737549  882252 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-129908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-129908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:04:02.737745  882252 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1228 07:04:02.763979  882252 cri.go:83] list returned 8 containers
	I1228 07:04:02.764053  882252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:04:02.776746  882252 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 07:04:02.776774  882252 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 07:04:02.776824  882252 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 07:04:02.788129  882252 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 07:04:02.789314  882252 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-129908" does not appear in /home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:04:02.789957  882252 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-552174/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-129908" cluster setting kubeconfig missing "default-k8s-diff-port-129908" context setting]
	I1228 07:04:02.793405  882252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/kubeconfig: {Name:mk8eb66c78260a0013ac235827a08f86055faf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:02.796509  882252 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 07:04:02.810377  882252 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1228 07:04:02.810427  882252 kubeadm.go:602] duration metric: took 33.643458ms to restartPrimaryControlPlane
	I1228 07:04:02.810439  882252 kubeadm.go:403] duration metric: took 72.900379ms to StartCluster
	I1228 07:04:02.810463  882252 settings.go:142] acquiring lock: {Name:mk0a3f928fed4bf13f8897ea15768d1c7b315118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:02.810543  882252 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:04:02.814033  882252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/kubeconfig: {Name:mk8eb66c78260a0013ac235827a08f86055faf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:02.814425  882252 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1228 07:04:02.814668  882252 config.go:182] Loaded profile config "default-k8s-diff-port-129908": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:04:02.814737  882252 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 07:04:02.814823  882252 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-129908"
	I1228 07:04:02.814837  882252 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-129908"
	W1228 07:04:02.814844  882252 addons.go:248] addon storage-provisioner should already be in state true
	I1228 07:04:02.814873  882252 host.go:66] Checking if "default-k8s-diff-port-129908" exists ...
	I1228 07:04:02.815322  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:04:02.815673  882252 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-129908"
	I1228 07:04:02.815706  882252 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-129908"
	I1228 07:04:02.815709  882252 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-129908"
	I1228 07:04:02.815744  882252 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-129908"
	W1228 07:04:02.815758  882252 addons.go:248] addon dashboard should already be in state true
	I1228 07:04:02.815802  882252 host.go:66] Checking if "default-k8s-diff-port-129908" exists ...
	I1228 07:04:02.815974  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:04:02.816174  882252 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-129908"
	I1228 07:04:02.816207  882252 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-129908"
	W1228 07:04:02.816231  882252 addons.go:248] addon metrics-server should already be in state true
	I1228 07:04:02.816264  882252 host.go:66] Checking if "default-k8s-diff-port-129908" exists ...
	I1228 07:04:02.816395  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:04:02.816719  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:04:02.819436  882252 out.go:179] * Verifying Kubernetes components...
	I1228 07:04:02.823186  882252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:04:02.861166  882252 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1228 07:04:02.861203  882252 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 07:04:02.862499  882252 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1228 07:04:02.862519  882252 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1228 07:04:02.862547  882252 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:04:02.862563  882252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 07:04:02.862596  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:02.862620  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:02.863967  882252 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-129908"
	W1228 07:04:02.863988  882252 addons.go:248] addon default-storageclass should already be in state true
	I1228 07:04:02.864027  882252 host.go:66] Checking if "default-k8s-diff-port-129908" exists ...
	I1228 07:04:02.864484  882252 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-129908 --format={{.State.Status}}
	I1228 07:04:02.872650  882252 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1228 07:04:02.877275  882252 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 07:03:58.217522  880223 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1228 07:03:58.222483  880223 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1228 07:03:58.223414  880223 api_server.go:141] control plane version: v1.35.0
	I1228 07:03:58.223442  880223 api_server.go:131] duration metric: took 506.434422ms to wait for apiserver health ...
	I1228 07:03:58.223451  880223 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 07:03:58.227321  880223 system_pods.go:59] 9 kube-system pods found
	I1228 07:03:58.227348  880223 system_pods.go:61] "coredns-7d764666f9-s8grm" [2e6d093f-2877-4940-9c33-86d69096da0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:03:58.227355  880223 system_pods.go:61] "etcd-embed-certs-982151" [c669a8a2-61c2-49c4-9fdc-53dc118bdfc4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:03:58.227362  880223 system_pods.go:61] "kindnet-fchxm" [831cdd63-4a6a-4639-823d-4474a33c5a36] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:03:58.227377  880223 system_pods.go:61] "kube-apiserver-embed-certs-982151" [a909bb5d-6c6d-4bc6-af01-10c1dd644033] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:03:58.227387  880223 system_pods.go:61] "kube-controller-manager-embed-certs-982151" [d6b9118f-9c7a-4f5c-ac1c-5dff86296027] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:03:58.227393  880223 system_pods.go:61] "kube-proxy-z29fh" [18a1ba5f-99d6-468e-aa14-5e347d2894a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:03:58.227400  880223 system_pods.go:61] "kube-scheduler-embed-certs-982151" [5a53225f-f1b0-497e-8376-d7c0c3336b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:03:58.227407  880223 system_pods.go:61] "metrics-server-5d785b57d4-xsks7" [81e3f749-eed9-432c-89e9-f4548e1b7e3f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:03:58.227416  880223 system_pods.go:61] "storage-provisioner" [098bf558-141e-4b8d-a1e3-aaae9afedb15] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:03:58.227424  880223 system_pods.go:74] duration metric: took 3.965842ms to wait for pod list to return data ...
	I1228 07:03:58.227433  880223 default_sa.go:34] waiting for default service account to be created ...
	I1228 07:03:58.229720  880223 default_sa.go:45] found service account: "default"
	I1228 07:03:58.229740  880223 default_sa.go:55] duration metric: took 2.300807ms for default service account to be created ...
	I1228 07:03:58.229747  880223 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 07:03:58.299736  880223 system_pods.go:86] 9 kube-system pods found
	I1228 07:03:58.299772  880223 system_pods.go:89] "coredns-7d764666f9-s8grm" [2e6d093f-2877-4940-9c33-86d69096da0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:03:58.299780  880223 system_pods.go:89] "etcd-embed-certs-982151" [c669a8a2-61c2-49c4-9fdc-53dc118bdfc4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:03:58.299787  880223 system_pods.go:89] "kindnet-fchxm" [831cdd63-4a6a-4639-823d-4474a33c5a36] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:03:58.299793  880223 system_pods.go:89] "kube-apiserver-embed-certs-982151" [a909bb5d-6c6d-4bc6-af01-10c1dd644033] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:03:58.299798  880223 system_pods.go:89] "kube-controller-manager-embed-certs-982151" [d6b9118f-9c7a-4f5c-ac1c-5dff86296027] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:03:58.299804  880223 system_pods.go:89] "kube-proxy-z29fh" [18a1ba5f-99d6-468e-aa14-5e347d2894a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:03:58.299809  880223 system_pods.go:89] "kube-scheduler-embed-certs-982151" [5a53225f-f1b0-497e-8376-d7c0c3336b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:03:58.299816  880223 system_pods.go:89] "metrics-server-5d785b57d4-xsks7" [81e3f749-eed9-432c-89e9-f4548e1b7e3f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:03:58.299823  880223 system_pods.go:89] "storage-provisioner" [098bf558-141e-4b8d-a1e3-aaae9afedb15] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:03:58.299833  880223 system_pods.go:126] duration metric: took 70.080198ms to wait for k8s-apps to be running ...
	I1228 07:03:58.299847  880223 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 07:03:58.299903  880223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:03:58.316616  880223 system_svc.go:56] duration metric: took 16.755637ms WaitForService to wait for kubelet
	I1228 07:03:58.316644  880223 kubeadm.go:587] duration metric: took 3.40319134s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
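
Editor's note: the `system_svc.go` wait just above shells out to systemd; `systemctl is-active --quiet <unit>` exits 0 only when the unit is active, so the whole check reduces to an exit-code test. A minimal local sketch of that probe in Go (the helper name is ours, not minikube's; minikube runs the command via sudo over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// unitIsActive reports whether a systemd unit is active by running
// `systemctl is-active --quiet <unit>`, which exits 0 iff the unit is active.
func unitIsActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", unitIsActive("kubelet"))
}
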
	I1228 07:03:58.316662  880223 node_conditions.go:102] verifying NodePressure condition ...
	I1228 07:03:58.319428  880223 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 07:03:58.319454  880223 node_conditions.go:123] node cpu capacity is 8
	I1228 07:03:58.319469  880223 node_conditions.go:105] duration metric: took 2.802451ms to run NodePressure ...
	I1228 07:03:58.319480  880223 start.go:242] waiting for startup goroutines ...
	I1228 07:03:58.319487  880223 start.go:247] waiting for cluster config update ...
	I1228 07:03:58.319498  880223 start.go:256] writing updated cluster config ...
	I1228 07:03:58.319774  880223 ssh_runner.go:195] Run: rm -f paused
	I1228 07:03:58.324556  880223 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:03:58.327768  880223 pod_ready.go:83] waiting for pod "coredns-7d764666f9-s8grm" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 07:04:00.333468  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:02.334146  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:03:58.931292  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:00.931470  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:02.938234  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	I1228 07:04:02.878697  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 07:04:02.878726  882252 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 07:04:02.878799  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:02.894522  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:02.906490  882252 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 07:04:02.906522  882252 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 07:04:02.906593  882252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-129908
	I1228 07:04:02.906643  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:02.913956  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
	I1228 07:04:02.933719  882252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/default-k8s-diff-port-129908/id_rsa Username:docker}
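
Editor's note: the `cli_runner` calls above resolve the SSH endpoint by asking Docker which host port is published for the container's 22/tcp, using the Go template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` to index the port map (the log wraps it in extra quotes that get trimmed). A sketch of the same lookup, assuming the container name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortForSSH asks Docker which host port is bound to the container's
// 22/tcp, with the same Go template shown in the log above.
func hostPortForSSH(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPortForSSH("default-k8s-diff-port-129908")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh port:", port) // 33133 in this run
}
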
	I1228 07:04:03.007313  882252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:04:03.020758  882252 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1228 07:04:03.020783  882252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1228 07:04:03.025468  882252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:04:03.025821  882252 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-129908" to be "Ready" ...
	I1228 07:04:03.029952  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 07:04:03.029974  882252 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 07:04:03.039959  882252 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1228 07:04:03.039983  882252 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1228 07:04:03.048956  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 07:04:03.048979  882252 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 07:04:03.049792  882252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:04:03.059613  882252 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:04:03.059634  882252 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1228 07:04:03.069470  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 07:04:03.069493  882252 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 07:04:03.079512  882252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:04:03.090099  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 07:04:03.090125  882252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 07:04:03.109191  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 07:04:03.109228  882252 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 07:04:03.127327  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 07:04:03.127354  882252 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 07:04:03.145332  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 07:04:03.145362  882252 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 07:04:03.161020  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 07:04:03.161041  882252 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 07:04:03.175283  882252 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:04:03.175302  882252 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 07:04:03.190565  882252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:04:04.366811  882252 node_ready.go:49] node "default-k8s-diff-port-129908" is "Ready"
	I1228 07:04:04.366854  882252 node_ready.go:38] duration metric: took 1.340986184s for node "default-k8s-diff-port-129908" to be "Ready" ...
	I1228 07:04:04.366876  882252 api_server.go:52] waiting for apiserver process to appear ...
	I1228 07:04:04.366953  882252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 07:04:05.079411  882252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.05389931s)
	I1228 07:04:05.079504  882252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.029690998s)
	I1228 07:04:05.079781  882252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.000232104s)
	I1228 07:04:05.079814  882252 addons.go:495] Verifying addon metrics-server=true in "default-k8s-diff-port-129908"
	I1228 07:04:05.079947  882252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.889324162s)
	I1228 07:04:05.080255  882252 api_server.go:72] duration metric: took 2.265784615s to wait for apiserver process to appear ...
	I1228 07:04:05.080277  882252 api_server.go:88] waiting for apiserver healthz status ...
	I1228 07:04:05.080339  882252 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1228 07:04:05.082924  882252 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-129908 addons enable metrics-server
	
	I1228 07:04:05.086253  882252 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 07:04:05.086281  882252 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 07:04:05.088197  882252 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I1228 07:04:05.089519  882252 addons.go:530] duration metric: took 2.27478482s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I1228 07:04:05.581379  882252 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1228 07:04:05.586973  882252 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1228 07:04:05.588678  882252 api_server.go:141] control plane version: v1.35.0
	I1228 07:04:05.588713  882252 api_server.go:131] duration metric: took 508.427311ms to wait for apiserver health ...
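
Editor's note: the 500 above is expected this early in startup. /healthz aggregates every registered check and fails closed; here two poststarthooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) had not finished, and the next poll half a second later returned 200. A minimal probe sketch, with certificate verification disabled purely for brevity (minikube itself trusts the cluster CA; anonymous access to /healthz is usually allowed on kubeadm-style clusters, but that is an assumption here):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// InsecureSkipVerify only for this sketch; use the cluster CA in real code.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.85.2:8444/healthz")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// 200 with body "ok" means healthy; a 500 carries the per-check [+]/[-] list.
	fmt.Println(resp.StatusCode, string(body))
}
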
	I1228 07:04:05.588726  882252 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 07:04:05.592639  882252 system_pods.go:59] 9 kube-system pods found
	I1228 07:04:05.592689  882252 system_pods.go:61] "coredns-7d764666f9-mbfzh" [30aa2ad5-7a16-4fce-8347-17630085bc40] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:04:05.592702  882252 system_pods.go:61] "etcd-default-k8s-diff-port-129908" [7c64ebeb-cc67-4b17-a9c2-765ad4d97798] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:04:05.592722  882252 system_pods.go:61] "kindnet-x5db5" [fb316251-22e1-43e0-b728-1f470d02cacb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:04:05.592730  882252 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-129908" [05583a6f-194c-42d1-a532-f3a5765c51db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:04:05.592740  882252 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-129908" [da0ccaf2-767f-4e33-8a8c-d79d87864368] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:04:05.592750  882252 system_pods.go:61] "kube-proxy-rzg9h" [08ba573c-03a0-4a1c-acf1-f34fce8de5d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:04:05.592758  882252 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-129908" [869d96de-6369-43cd-89b8-96bc40f07bff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:04:05.592765  882252 system_pods.go:61] "metrics-server-5d785b57d4-6h7tv" [3f1c607c-1aeb-4fc7-9865-8b161c339288] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:04:05.592772  882252 system_pods.go:61] "storage-provisioner" [f41b8f35-c7cc-4b8e-ae42-81904d11bcd0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:04:05.592784  882252 system_pods.go:74] duration metric: took 4.051269ms to wait for pod list to return data ...
	I1228 07:04:05.592793  882252 default_sa.go:34] waiting for default service account to be created ...
	I1228 07:04:05.595925  882252 default_sa.go:45] found service account: "default"
	I1228 07:04:05.595948  882252 default_sa.go:55] duration metric: took 3.147858ms for default service account to be created ...
	I1228 07:04:05.595959  882252 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 07:04:05.601261  882252 system_pods.go:86] 9 kube-system pods found
	I1228 07:04:05.601357  882252 system_pods.go:89] "coredns-7d764666f9-mbfzh" [30aa2ad5-7a16-4fce-8347-17630085bc40] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:04:05.601384  882252 system_pods.go:89] "etcd-default-k8s-diff-port-129908" [7c64ebeb-cc67-4b17-a9c2-765ad4d97798] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:04:05.601428  882252 system_pods.go:89] "kindnet-x5db5" [fb316251-22e1-43e0-b728-1f470d02cacb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:04:05.601469  882252 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-129908" [05583a6f-194c-42d1-a532-f3a5765c51db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:04:05.601511  882252 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-129908" [da0ccaf2-767f-4e33-8a8c-d79d87864368] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:04:05.601549  882252 system_pods.go:89] "kube-proxy-rzg9h" [08ba573c-03a0-4a1c-acf1-f34fce8de5d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:04:05.601594  882252 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-129908" [869d96de-6369-43cd-89b8-96bc40f07bff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:04:05.601633  882252 system_pods.go:89] "metrics-server-5d785b57d4-6h7tv" [3f1c607c-1aeb-4fc7-9865-8b161c339288] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:04:05.601672  882252 system_pods.go:89] "storage-provisioner" [f41b8f35-c7cc-4b8e-ae42-81904d11bcd0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:04:05.601685  882252 system_pods.go:126] duration metric: took 5.718923ms to wait for k8s-apps to be running ...
	I1228 07:04:05.601696  882252 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 07:04:05.601792  882252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:04:05.633925  882252 system_svc.go:56] duration metric: took 32.2184ms WaitForService to wait for kubelet
	I1228 07:04:05.633962  882252 kubeadm.go:587] duration metric: took 2.819493554s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:04:05.633987  882252 node_conditions.go:102] verifying NodePressure condition ...
	I1228 07:04:05.639517  882252 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 07:04:05.639550  882252 node_conditions.go:123] node cpu capacity is 8
	I1228 07:04:05.639569  882252 node_conditions.go:105] duration metric: took 5.575875ms to run NodePressure ...
	I1228 07:04:05.639586  882252 start.go:242] waiting for startup goroutines ...
	I1228 07:04:05.639597  882252 start.go:247] waiting for cluster config update ...
	I1228 07:04:05.639614  882252 start.go:256] writing updated cluster config ...
	I1228 07:04:05.639915  882252 ssh_runner.go:195] Run: rm -f paused
	I1228 07:04:05.647014  882252 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:04:05.659239  882252 pod_ready.go:83] waiting for pod "coredns-7d764666f9-mbfzh" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 07:04:02.704180  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:04.704962  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:07.203308  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:04.335906  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:06.878878  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:05.434776  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:07.931491  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:07.670978  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:10.165524  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:09.204178  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:11.703509  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:09.333196  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:11.333675  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:10.430946  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:12.431254  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:12.166364  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:14.665182  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:14.203171  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:16.203563  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	W1228 07:04:13.334174  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:15.833412  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:17.833543  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:14.931067  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:17.431207  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	W1228 07:04:16.665304  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:19.164406  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:18.204086  873440 pod_ready.go:104] pod "coredns-7d764666f9-9n78x" is not "Ready", error: <nil>
	I1228 07:04:20.703420  873440 pod_ready.go:94] pod "coredns-7d764666f9-9n78x" is "Ready"
	I1228 07:04:20.703450  873440 pod_ready.go:86] duration metric: took 38.005418075s for pod "coredns-7d764666f9-9n78x" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.705734  873440 pod_ready.go:83] waiting for pod "etcd-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.709107  873440 pod_ready.go:94] pod "etcd-no-preload-456925" is "Ready"
	I1228 07:04:20.709130  873440 pod_ready.go:86] duration metric: took 3.373198ms for pod "etcd-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.711055  873440 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.714256  873440 pod_ready.go:94] pod "kube-apiserver-no-preload-456925" is "Ready"
	I1228 07:04:20.714278  873440 pod_ready.go:86] duration metric: took 3.20057ms for pod "kube-apiserver-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.715898  873440 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:20.901759  873440 pod_ready.go:94] pod "kube-controller-manager-no-preload-456925" is "Ready"
	I1228 07:04:20.901785  873440 pod_ready.go:86] duration metric: took 185.864424ms for pod "kube-controller-manager-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.101880  873440 pod_ready.go:83] waiting for pod "kube-proxy-mn4cz" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.501912  873440 pod_ready.go:94] pod "kube-proxy-mn4cz" is "Ready"
	I1228 07:04:21.501939  873440 pod_ready.go:86] duration metric: took 400.033432ms for pod "kube-proxy-mn4cz" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.701807  873440 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.102084  873440 pod_ready.go:94] pod "kube-scheduler-no-preload-456925" is "Ready"
	I1228 07:04:22.102117  873440 pod_ready.go:86] duration metric: took 400.282919ms for pod "kube-scheduler-no-preload-456925" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.102133  873440 pod_ready.go:40] duration metric: took 39.409345661s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:04:22.149656  873440 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 07:04:22.151334  873440 out.go:179] * Done! kubectl is now configured to use "no-preload-456925" cluster and "default" namespace by default
	W1228 07:04:20.333502  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:22.336156  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:19.930823  874073 pod_ready.go:104] pod "coredns-5dd5756b68-kcdsc" is not "Ready", error: <nil>
	I1228 07:04:21.932881  874073 pod_ready.go:94] pod "coredns-5dd5756b68-kcdsc" is "Ready"
	I1228 07:04:21.932918  874073 pod_ready.go:86] duration metric: took 37.007863966s for pod "coredns-5dd5756b68-kcdsc" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.935882  874073 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.939908  874073 pod_ready.go:94] pod "etcd-old-k8s-version-805353" is "Ready"
	I1228 07:04:21.939936  874073 pod_ready.go:86] duration metric: took 4.02365ms for pod "etcd-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.942466  874073 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.946100  874073 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-805353" is "Ready"
	I1228 07:04:21.946121  874073 pod_ready.go:86] duration metric: took 3.628428ms for pod "kube-apiserver-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:21.948541  874073 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.129399  874073 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-805353" is "Ready"
	I1228 07:04:22.129426  874073 pod_ready.go:86] duration metric: took 180.865961ms for pod "kube-controller-manager-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.330689  874073 pod_ready.go:83] waiting for pod "kube-proxy-sd5kh" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.729871  874073 pod_ready.go:94] pod "kube-proxy-sd5kh" is "Ready"
	I1228 07:04:22.729898  874073 pod_ready.go:86] duration metric: took 399.179627ms for pod "kube-proxy-sd5kh" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:22.929709  874073 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:23.329353  874073 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-805353" is "Ready"
	I1228 07:04:23.329386  874073 pod_ready.go:86] duration metric: took 399.644333ms for pod "kube-scheduler-old-k8s-version-805353" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:04:23.329402  874073 pod_ready.go:40] duration metric: took 38.409544453s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:04:23.376490  874073 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1228 07:04:23.378140  874073 out.go:203] 
	W1228 07:04:23.379347  874073 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1228 07:04:23.380351  874073 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1228 07:04:23.381580  874073 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-805353" cluster and "default" namespace by default
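
Editor's note: the two `start.go:625` lines above report kubectl/cluster minor-version skew: 1.35.0 against 1.35.0 gives skew 0 and passes silently, while 1.35.0 against 1.28.0 gives skew 7 and triggers the warning plus the `minikube kubectl` hint, since kubectl is only supported within one minor version of the server. A toy version of that arithmetic (our simplified parsing, not minikube's):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns |minor(a) - minor(b)| for versions like "1.35.0".
func minorSkew(a, b string) int {
	minor := func(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		m, _ := strconv.Atoi(parts[1])
		return m
	}
	d := minor(a) - minor(b)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Println(minorSkew("1.35.0", "1.35.0")) // 0: no warning
	fmt.Println(minorSkew("1.35.0", "1.28.0")) // 7: warn about incompatibilities
}
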
	W1228 07:04:21.665046  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:23.666729  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:24.833321  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:26.834192  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:26.164189  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:28.164450  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:30.164565  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:29.334446  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:31.833411  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:32.664464  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:34.665842  882252 pod_ready.go:104] pod "coredns-7d764666f9-mbfzh" is not "Ready", error: <nil>
	W1228 07:04:33.833511  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
	W1228 07:04:36.335200  880223 pod_ready.go:104] pod "coredns-7d764666f9-s8grm" is not "Ready", error: <nil>
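
Editor's note: the `pod_ready.go:104` warnings that dominate the tail of each run come from polling every labelled kube-system pod until its Ready condition turns True or the pod is gone, capped at the 4m0s announced by the `pod_ready.go:37` lines above. A condensed sketch of the underlying condition check with client-go (polling and backoff omitted; kubeconfig loading from the default home path is our assumption):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7d764666f9-mbfzh", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", podIsReady(pod)) // in practice, loop with a delay until true
}
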
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	356a490713e31       6e38f40d628db       13 seconds ago       Running             storage-provisioner       2                   6147e4721220a       storage-provisioner                              kube-system
	be331eff4baba       07655ddf2eebe       39 seconds ago       Running             kubernetes-dashboard      0                   a7c34dad94951       kubernetes-dashboard-8694d4445c-wngnn            kubernetes-dashboard
	a9fa210a0227a       4921d7a6dffa9       56 seconds ago       Running             kindnet-cni               1                   1d8b64e039404       kindnet-qcscm                                    kube-system
	e82a7dad9f2f3       56cc512116c8f       56 seconds ago       Running             busybox                   1                   5b299920b59a8       busybox                                          default
	e7fac735482a5       ead0a4a53df89       56 seconds ago       Running             coredns                   1                   aa61c62e5d74f       coredns-5dd5756b68-kcdsc                         kube-system
	657662d35f27a       6e38f40d628db       56 seconds ago       Exited              storage-provisioner       1                   6147e4721220a       storage-provisioner                              kube-system
	906815baf4f48       ea1030da44aa1       56 seconds ago       Running             kube-proxy                1                   2450a4c1afd0d       kube-proxy-sd5kh                                 kube-system
	f843f61fe24b5       f6f496300a2ae       About a minute ago   Running             kube-scheduler            1                   ccd6b7e92451f       kube-scheduler-old-k8s-version-805353            kube-system
	3b32102900823       4be79c38a4bab       About a minute ago   Running             kube-controller-manager   1                   0399d14f4ad03       kube-controller-manager-old-k8s-version-805353   kube-system
	c8f0105c83da5       bb5e0dde9054c       About a minute ago   Running             kube-apiserver            1                   2485cc50bbba6       kube-apiserver-old-k8s-version-805353            kube-system
	c1b3d72f69249       73deb9a3f7025       About a minute ago   Running             etcd                      1                   0d199fd2871f2       etcd-old-k8s-version-805353                      kube-system
	19b35739eed7c       56cc512116c8f       About a minute ago   Exited              busybox                   0                   5cd53a4877108       busybox                                          default
	e66e9087b6077       ead0a4a53df89       About a minute ago   Exited              coredns                   0                   45fac7f344922       coredns-5dd5756b68-kcdsc                         kube-system
	f46451de7fd5e       4921d7a6dffa9       About a minute ago   Exited              kindnet-cni               0                   45f82d3229180       kindnet-qcscm                                    kube-system
	fd85b3f52f9a8       ea1030da44aa1       About a minute ago   Exited              kube-proxy                0                   40536d6fc5843       kube-proxy-sd5kh                                 kube-system
	c5aebed7be58d       4be79c38a4bab       2 minutes ago        Exited              kube-controller-manager   0                   164505068476d       kube-controller-manager-old-k8s-version-805353   kube-system
	ad8490b8ae89d       bb5e0dde9054c       2 minutes ago        Exited              kube-apiserver            0                   f52d48b8de9d3       kube-apiserver-old-k8s-version-805353            kube-system
	561f17c3d447d       73deb9a3f7025       2 minutes ago        Exited              etcd                      0                   db550e957d0e1       etcd-old-k8s-version-805353                      kube-system
	cd823b9643e46       f6f496300a2ae       2 minutes ago        Exited              kube-scheduler            0                   f146a7c8125b1       kube-scheduler-old-k8s-version-805353            kube-system
	
	
	==> containerd <==
	Dec 28 07:04:27 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:27.712142159Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.022392655Z" level=info msg="StopPodSandbox for \"a41ff3cccd8312675205cf4d00a4248feccc2ab7ecf3af6e201b7f115d5459a5\""
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.023048564Z" level=info msg="TearDown network for sandbox \"a41ff3cccd8312675205cf4d00a4248feccc2ab7ecf3af6e201b7f115d5459a5\" successfully"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.023107495Z" level=info msg="StopPodSandbox for \"a41ff3cccd8312675205cf4d00a4248feccc2ab7ecf3af6e201b7f115d5459a5\" returns successfully"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.025446268Z" level=info msg="RemovePodSandbox for \"a41ff3cccd8312675205cf4d00a4248feccc2ab7ecf3af6e201b7f115d5459a5\""
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.025495090Z" level=info msg="Forcibly stopping sandbox \"a41ff3cccd8312675205cf4d00a4248feccc2ab7ecf3af6e201b7f115d5459a5\""
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.025991463Z" level=info msg="TearDown network for sandbox \"a41ff3cccd8312675205cf4d00a4248feccc2ab7ecf3af6e201b7f115d5459a5\" successfully"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.028783730Z" level=info msg="Ensure that sandbox a41ff3cccd8312675205cf4d00a4248feccc2ab7ecf3af6e201b7f115d5459a5 in task-service has been cleanup successfully"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.033903017Z" level=info msg="RemovePodSandbox \"a41ff3cccd8312675205cf4d00a4248feccc2ab7ecf3af6e201b7f115d5459a5\" returns successfully"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.034885267Z" level=info msg="StopPodSandbox for \"f7eddc20231c06dbe235523053a65e430d6d2d18706ce9bd09002fb1ddca10a9\""
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.067440067Z" level=info msg="TearDown network for sandbox \"f7eddc20231c06dbe235523053a65e430d6d2d18706ce9bd09002fb1ddca10a9\" successfully"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.068006970Z" level=info msg="StopPodSandbox for \"f7eddc20231c06dbe235523053a65e430d6d2d18706ce9bd09002fb1ddca10a9\" returns successfully"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.068377411Z" level=info msg="RemovePodSandbox for \"f7eddc20231c06dbe235523053a65e430d6d2d18706ce9bd09002fb1ddca10a9\""
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.068414134Z" level=info msg="Forcibly stopping sandbox \"f7eddc20231c06dbe235523053a65e430d6d2d18706ce9bd09002fb1ddca10a9\""
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.090857132Z" level=info msg="TearDown network for sandbox \"f7eddc20231c06dbe235523053a65e430d6d2d18706ce9bd09002fb1ddca10a9\" successfully"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.093336854Z" level=info msg="Ensure that sandbox f7eddc20231c06dbe235523053a65e430d6d2d18706ce9bd09002fb1ddca10a9 in task-service has been cleanup successfully"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.096475429Z" level=info msg="RemovePodSandbox \"f7eddc20231c06dbe235523053a65e430d6d2d18706ce9bd09002fb1ddca10a9\" returns successfully"
	Dec 28 07:04:37 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:37.135591826Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Dec 28 07:04:38 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:38.320595015Z" level=info msg="PullImage \"registry.k8s.io/echoserver:1.4\""
	Dec 28 07:04:38 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:38.374750946Z" level=error msg="PullImage \"registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\""
	Dec 28 07:04:38 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:38.374812586Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:04:38 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:38.375690079Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 28 07:04:38 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:38.414950351Z" level=info msg="fetch failed" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Dec 28 07:04:38 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:38.416238367Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:04:38 old-k8s-version-805353 containerd[448]: time="2025-12-28T07:04:38.416246430Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
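
Editor's note: the two pull failures above are distinct. fake.domain is an unresolvable registry, which appears deliberate in this suite's metrics-server scenario (hence the Pending metrics-server pods earlier). registry.k8s.io/echoserver:1.4, by contrast, resolves fine but still serves a Docker schema 1 manifest (application/vnd.docker.distribution.manifest.v1+prettyjws), which this node's containerd 2.2.1 rejects, as the error text itself explains. One way to see which manifest type a registry serves, sketched against the v2 registry API (anonymous access assumed; some registries require a bearer token):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Ask the registry for the manifest, advertising modern media types first;
	// the Content-Type of the response reveals what the registry will serve.
	req, _ := http.NewRequest("HEAD", "https://registry.k8s.io/v2/echoserver/manifests/1.4", nil)
	req.Header.Set("Accept",
		"application/vnd.oci.image.manifest.v1+json, "+
			"application/vnd.docker.distribution.manifest.v2+json, "+
			"application/vnd.docker.distribution.manifest.v1+prettyjws")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	// A schema 1 Content-Type here would explain the containerd rejection above.
	fmt.Println(resp.Status, resp.Header.Get("Content-Type"))
}
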
	
	
	==> describe nodes <==
	Name:               old-k8s-version-805353
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-805353
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=old-k8s-version-805353
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T07_02_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 07:02:37 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-805353
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 07:04:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 07:04:37 +0000   Sun, 28 Dec 2025 07:02:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 07:04:37 +0000   Sun, 28 Dec 2025 07:02:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 07:04:37 +0000   Sun, 28 Dec 2025 07:02:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 28 Dec 2025 07:04:37 +0000   Sun, 28 Dec 2025 07:04:37 +0000   KubeletNotReady              container runtime status check may not have completed yet
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-805353
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                e2225a48-4058-4e27-bcb6-972f2816af01
	  Boot ID:                    b0f6328b-901c-4d58-bf8e-80c711dcb897
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-kcdsc                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-old-k8s-version-805353                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m
	  kube-system                 kindnet-qcscm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-old-k8s-version-805353             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-805353    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-sd5kh                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-805353             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 metrics-server-57f55c9bc5-kwn5g                   100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         80s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-5wlfr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-wngnn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 56s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  2m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-805353 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-805353 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x7 over 2m5s)  kubelet          Node old-k8s-version-805353 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node old-k8s-version-805353 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node old-k8s-version-805353 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node old-k8s-version-805353 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           107s                 node-controller  Node old-k8s-version-805353 event: Registered Node old-k8s-version-805353 in Controller
	  Normal  NodeReady                92s                  kubelet          Node old-k8s-version-805353 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  61s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)    kubelet          Node old-k8s-version-805353 status is now: NodeHasSufficientMemory
	  Normal  Starting                 61s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     61s (x7 over 61s)    kubelet          Node old-k8s-version-805353 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)    kubelet          Node old-k8s-version-805353 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           45s                  node-controller  Node old-k8s-version-805353 event: Registered Node old-k8s-version-805353 in Controller
	  Normal  Starting                 3s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s                   kubelet          Node old-k8s-version-805353 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s                   kubelet          Node old-k8s-version-805353 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s                   kubelet          Node old-k8s-version-805353 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3s                   kubelet          Node old-k8s-version-805353 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3s                   kubelet          Updated Node Allocatable limit across pods
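
Editor's note: the describe output above captures the restart race directly: kubelet restarted 3s before the dump, reported NodeNotReady ("container runtime status check may not have completed yet"), and the node carries the node.kubernetes.io/not-ready:NoSchedule taint until the Ready condition flips back to True. A sketch that reads both fields with client-go, under the same kubeconfig-loading assumption as the earlier sketch:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-805353", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Println("Ready:", c.Status, "-", c.Reason)
		}
	}
	for _, t := range node.Spec.Taints {
		// node.kubernetes.io/not-ready NoSchedule while the kubelet is starting
		fmt.Println("taint:", t.Key, t.Effect)
	}
}
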
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 27 65 9e 6b ac 08 06
	[  +0.000464] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 2e 94 6c 92 e4 08 06
	[Dec28 07:01] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff a6 29 4f 66 86 ca 08 06
	[  +0.004985] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 91 2b a7 d5 cf 08 06
	[ +18.742603] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 b5 93 ca e4 71 08 06
	[  +0.109309] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 c6 37 a9 4f 21 08 06
	[  +8.643062] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 52 6e 36 23 da 08 06
	[ +19.438211] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 70 fa ed e0 93 08 06
	[  +0.000530] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa 52 6e 36 23 da 08 06
	[  +0.857565] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 1d c3 e6 9a 7e 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 29 4f 66 86 ca 08 06
	[Dec28 07:02] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 a0 55 33 45 d1 08 06
	[  +0.000392] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 c6 37 a9 4f 21 08 06
	
	
	==> kernel <==
	 07:04:40 up  3:47,  0 user,  load average: 2.15, 2.88, 10.67
	Linux old-k8s-version-805353 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.016945    2392 topology_manager.go:215] "Topology Admit Handler" podUID="d6aeb882-f645-4426-a4c9-e532ab8e57e7" podNamespace="kube-system" podName="coredns-5dd5756b68-kcdsc"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.017051    2392 topology_manager.go:215] "Topology Admit Handler" podUID="53e9f353-a057-47be-8ab7-0544989c007f" podNamespace="kube-system" podName="kindnet-qcscm"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.017139    2392 topology_manager.go:215] "Topology Admit Handler" podUID="b63578b0-6b5a-40c8-a669-ee44a8351cc2" podNamespace="kube-system" podName="storage-provisioner"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.017236    2392 topology_manager.go:215] "Topology Admit Handler" podUID="d9d29e50-d42e-4587-a782-865a82530db0" podNamespace="default" podName="busybox"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.017322    2392 topology_manager.go:215] "Topology Admit Handler" podUID="61fd49e7-f7e7-4a20-9615-057de964bc02" podNamespace="kube-system" podName="metrics-server-57f55c9bc5-kwn5g"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.017399    2392 topology_manager.go:215] "Topology Admit Handler" podUID="d601191e-5f70-4d10-9a9b-58c41c86d1d4" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-wngnn"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.017469    2392 topology_manager.go:215] "Topology Admit Handler" podUID="2c6d7704-ddd1-42a1-95f5-23d0199d28e3" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-5wlfr"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.023297    2392 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.044955    2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/53e9f353-a057-47be-8ab7-0544989c007f-cni-cfg\") pod \"kindnet-qcscm\" (UID: \"53e9f353-a057-47be-8ab7-0544989c007f\") " pod="kube-system/kindnet-qcscm"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.045078    2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a3c98b1-07c1-4039-ab3b-16af3801cac8-lib-modules\") pod \"kube-proxy-sd5kh\" (UID: \"7a3c98b1-07c1-4039-ab3b-16af3801cac8\") " pod="kube-system/kube-proxy-sd5kh"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.045127    2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53e9f353-a057-47be-8ab7-0544989c007f-xtables-lock\") pod \"kindnet-qcscm\" (UID: \"53e9f353-a057-47be-8ab7-0544989c007f\") " pod="kube-system/kindnet-qcscm"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.045158    2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53e9f353-a057-47be-8ab7-0544989c007f-lib-modules\") pod \"kindnet-qcscm\" (UID: \"53e9f353-a057-47be-8ab7-0544989c007f\") " pod="kube-system/kindnet-qcscm"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.045726    2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b63578b0-6b5a-40c8-a669-ee44a8351cc2-tmp\") pod \"storage-provisioner\" (UID: \"b63578b0-6b5a-40c8-a669-ee44a8351cc2\") " pod="kube-system/storage-provisioner"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: I1228 07:04:38.045911    2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a3c98b1-07c1-4039-ab3b-16af3801cac8-xtables-lock\") pod \"kube-proxy-sd5kh\" (UID: \"7a3c98b1-07c1-4039-ab3b-16af3801cac8\") " pod="kube-system/kube-proxy-sd5kh"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.124772    2392 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-old-k8s-version-805353\" already exists" pod="kube-system/kube-controller-manager-old-k8s-version-805353"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.125266    2392 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-old-k8s-version-805353\" already exists" pod="kube-system/kube-apiserver-old-k8s-version-805353"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.125544    2392 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"etcd-old-k8s-version-805353\" already exists" pod="kube-system/etcd-old-k8s-version-805353"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.375145    2392 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.375208    2392 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.375537    2392 kuberuntime_manager.go:1209] container &Container{Name:dashboard-metrics-scraper,Image:registry.k8s.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8q24p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dashboard-metrics-scraper-5f989dc9cf-5wlfr_kubernetes-dashboard(2c6d7704-ddd1-42a1-95f5-23d0199d28e3): ErrImagePull: rpc error: code = Unimplemented desc = failed to pull and unpack image "registry.k8s.io/echoserver:1.4": not implemented: media type "application/vnd.docker.distribution.manifest.v1+prettyjws" is no longer supported since containerd v2.1, please rebuild the image as "application/vnd.docker.distribution.manifest.v2+json" or "application/vnd.oci.image.manifest.v1+json"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.375654    2392 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"rpc error: code = Unimplemented desc = failed to pull and unpack image \\\"registry.k8s.io/echoserver:1.4\\\": not implemented: media type \\\"application/vnd.docker.distribution.manifest.v1+prettyjws\\\" is no longer supported since containerd v2.1, please rebuild the image as \\\"application/vnd.docker.distribution.manifest.v2+json\\\" or \\\"application/vnd.oci.image.manifest.v1+json\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5wlfr" podUID="2c6d7704-ddd1-42a1-95f5-23d0199d28e3"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.416640    2392 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.416702    2392 kuberuntime_image.go:53] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.416872    2392 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7xn9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-kwn5g_kube-system(61fd49e7-f7e7-4a20-9615-057de964bc02): ErrImagePull: failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Dec 28 07:04:38 old-k8s-version-805353 kubelet[2392]: E1228 07:04:38.416969    2392 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-kwn5g" podUID="61fd49e7-f7e7-4a20-9615-057de964bc02"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-805353 -n old-k8s-version-805353
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-805353 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-57f55c9bc5-kwn5g dashboard-metrics-scraper-5f989dc9cf-5wlfr
helpers_test.go:283: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context old-k8s-version-805353 describe pod metrics-server-57f55c9bc5-kwn5g dashboard-metrics-scraper-5f989dc9cf-5wlfr
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context old-k8s-version-805353 describe pod metrics-server-57f55c9bc5-kwn5g dashboard-metrics-scraper-5f989dc9cf-5wlfr: exit status 1 (58.898088ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-kwn5g" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-5f989dc9cf-5wlfr" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context old-k8s-version-805353 describe pod metrics-server-57f55c9bc5-kwn5g dashboard-metrics-scraper-5f989dc9cf-5wlfr: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.19s)
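Two different pull failures are interleaved in the kubelet log above, and only one is a real regression signal. The metrics-server pull against fake.domain fails by design: the Audit table later in this report shows these tests enable the addon with --registries=MetricsServer=fake.domain precisely so the pull cannot succeed. The dashboard-metrics-scraper failure is environmental: registry.k8s.io/echoserver:1.4 is published as a Docker schema 1 manifest (application/vnd.docker.distribution.manifest.v1+prettyjws), which containerd removed support for in v2.1. A minimal sketch of how to confirm and work around this, assuming crane (from go-containerregistry) is installed and registry.example.com is a hypothetical registry you control:

	# Fetch the raw manifest; a schema 1 image reports "schemaVersion": 1
	# rather than a schema 2 / OCI mediaType.
	crane manifest registry.k8s.io/echoserver:1.4 | head -n 5
	# Rebuilding with current tooling emits a schema 2 / OCI manifest
	# that containerd v2.1+ accepts, as the error text above suggests.
	docker build -t registry.example.com/echoserver:1.4 .
	docker push registry.example.com/echoserver:1.4
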

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.75s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-129908 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-129908 -n default-k8s-diff-port-129908
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-129908 -n default-k8s-diff-port-129908: exit status 2 (335.775799ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Running"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-129908 -n default-k8s-diff-port-129908
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-129908 -n default-k8s-diff-port-129908: exit status 2 (407.250862ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-129908 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-129908 -n default-k8s-diff-port-129908
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-129908 -n default-k8s-diff-port-129908
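For reference, the failing assertion above can be replayed by hand with the same binary and profile name recorded in the log. After pausing, the harness expects the apiserver component to report "Paused" and the kubelet "Stopped"; exit status 2 from status is normal whenever a component is not Running (hence the "may be ok" notes):

	# Pause the profile, then read per-component status via Go templates.
	out/minikube-linux-amd64 pause -p default-k8s-diff-port-129908 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-129908   # want: Paused
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-129908    # want: Stopped
	# Undo before further debugging.
	out/minikube-linux-amd64 unpause -p default-k8s-diff-port-129908 --alsologtostderr -v=1
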
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-129908
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-129908:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f3ad300c2e556f4d1d26048ce6da2a6ca4440eff851f10c11aac4b5101eda437",
	        "Created": "2025-12-28T07:02:56.427750014Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 882451,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:03:55.998305648Z",
	            "FinishedAt": "2025-12-28T07:03:54.908388831Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/f3ad300c2e556f4d1d26048ce6da2a6ca4440eff851f10c11aac4b5101eda437/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f3ad300c2e556f4d1d26048ce6da2a6ca4440eff851f10c11aac4b5101eda437/hostname",
	        "HostsPath": "/var/lib/docker/containers/f3ad300c2e556f4d1d26048ce6da2a6ca4440eff851f10c11aac4b5101eda437/hosts",
	        "LogPath": "/var/lib/docker/containers/f3ad300c2e556f4d1d26048ce6da2a6ca4440eff851f10c11aac4b5101eda437/f3ad300c2e556f4d1d26048ce6da2a6ca4440eff851f10c11aac4b5101eda437-json.log",
	        "Name": "/default-k8s-diff-port-129908",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-129908:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-129908",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f3ad300c2e556f4d1d26048ce6da2a6ca4440eff851f10c11aac4b5101eda437",
	                "LowerDir": "/var/lib/docker/overlay2/b9b42e2ff43ef9b1d7d999eec7f438588819ad6fc123e7b3b06d1e993e7e0a39-init/diff:/var/lib/docker/overlay2/dfc7a4c580b7be84b0f83410b7478c6f6aa3f00b996556623ab9129bd6527422/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b9b42e2ff43ef9b1d7d999eec7f438588819ad6fc123e7b3b06d1e993e7e0a39/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b9b42e2ff43ef9b1d7d999eec7f438588819ad6fc123e7b3b06d1e993e7e0a39/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b9b42e2ff43ef9b1d7d999eec7f438588819ad6fc123e7b3b06d1e993e7e0a39/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-129908",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-129908/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-129908",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-129908",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-129908",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3ed413bba346cc53c7538a7e6b8e51f6bb5b193c816299a1c5aaf86b7d0c7cba",
	            "SandboxKey": "/var/run/docker/netns/3ed413bba346",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-129908": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9d99ed26244369d409963449a6dbb632f98886c438bb8df08d0e6054451cb281",
	                    "EndpointID": "263ce74b40f91c9e07d4ef160abaf52899dce473add501eb0fa35f16064b3a68",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "8e:1b:ef:c8:e0:da",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-129908",
	                        "f3ad300c2e55"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
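The full docker inspect dump above is what the harness archives; when triaging interactively, the same data can be narrowed with docker inspect's Go-template flag. Note that State.Paused stays false even though minikube pause just ran, since minikube's pause acts on the Kubernetes processes inside the kic container rather than on the Docker container itself. Two hand-run examples against the profile container:

	# Container state as Docker sees it.
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-129908
	# Host port forwarded to the in-container apiserver port 8444.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-129908
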
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-129908 -n default-k8s-diff-port-129908
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-129908 logs -n 25
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │            PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-129908 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ stop    │ -p default-k8s-diff-port-129908 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-982151 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-982151            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p embed-certs-982151 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                        │ embed-certs-982151            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:04 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-129908 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p default-k8s-diff-port-129908 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:04 UTC │
	│ image   │ no-preload-456925 image list --format=json                                                                                                                                                                                                          │ no-preload-456925             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p no-preload-456925 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-456925             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ image   │ old-k8s-version-805353 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-805353        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ unpause │ -p no-preload-456925 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-456925             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p old-k8s-version-805353 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-805353        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ unpause │ -p old-k8s-version-805353 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-805353        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ delete  │ -p no-preload-456925                                                                                                                                                                                                                                │ no-preload-456925             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ delete  │ -p old-k8s-version-805353                                                                                                                                                                                                                           │ old-k8s-version-805353        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ delete  │ -p no-preload-456925                                                                                                                                                                                                                                │ no-preload-456925             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ start   │ -p newest-cni-190777 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0 │ newest-cni-190777             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │                     │
	│ delete  │ -p old-k8s-version-805353                                                                                                                                                                                                                           │ old-k8s-version-805353        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ start   │ -p test-preload-dl-gcs-287055 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd                                                                        │ test-preload-dl-gcs-287055    │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │                     │
	│ image   │ default-k8s-diff-port-129908 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p default-k8s-diff-port-129908 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ delete  │ -p test-preload-dl-gcs-287055                                                                                                                                                                                                                       │ test-preload-dl-gcs-287055    │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ start   │ -p test-preload-dl-github-941249 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd                                                                  │ test-preload-dl-github-941249 │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │                     │
	│ unpause │ -p default-k8s-diff-port-129908 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ image   │ embed-certs-982151 image list --format=json                                                                                                                                                                                                         │ embed-certs-982151            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p embed-certs-982151 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-982151            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:05 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:04:57
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:04:57.195692  893941 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:04:57.195974  893941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:04:57.195986  893941 out.go:374] Setting ErrFile to fd 2...
	I1228 07:04:57.195990  893941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:04:57.196196  893941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 07:04:57.196751  893941 out.go:368] Setting JSON to false
	I1228 07:04:57.198139  893941 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13641,"bootTime":1766891856,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 07:04:57.198212  893941 start.go:143] virtualization: kvm guest
	I1228 07:04:57.202341  893941 out.go:179] * [test-preload-dl-github-941249] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 07:04:57.203691  893941 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:04:57.203750  893941 notify.go:221] Checking for updates...
	I1228 07:04:57.206035  893941 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:04:57.207229  893941 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:04:57.208389  893941 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	I1228 07:04:57.209477  893941 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 07:04:57.210591  893941 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:04:57.212115  893941 config.go:182] Loaded profile config "default-k8s-diff-port-129908": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:04:57.212300  893941 config.go:182] Loaded profile config "embed-certs-982151": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:04:57.212438  893941 config.go:182] Loaded profile config "newest-cni-190777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:04:57.212567  893941 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:04:57.238559  893941 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 07:04:57.238660  893941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:04:57.297433  893941 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 07:04:57.286031992 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 07:04:57.297580  893941 docker.go:319] overlay module found
	I1228 07:04:57.298969  893941 out.go:179] * Using the docker driver based on user configuration
	I1228 07:04:57.300613  893941 start.go:309] selected driver: docker
	I1228 07:04:57.300634  893941 start.go:928] validating driver "docker" against <nil>
	I1228 07:04:57.300764  893941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:04:57.373040  893941 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 07:04:57.362531212 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 07:04:57.373260  893941 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 07:04:57.374009  893941 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1228 07:04:57.374205  893941 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 07:04:57.375934  893941 out.go:179] * Using Docker driver with root privileges
	I1228 07:04:57.376909  893941 cni.go:84] Creating CNI manager for ""
	I1228 07:04:57.376990  893941 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:04:57.377006  893941 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1228 07:04:57.377090  893941 start.go:353] cluster config:
	{Name:test-preload-dl-github-941249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-rc.2 ClusterName:test-preload-dl-github-941249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:04:57.378318  893941 out.go:179] * Starting "test-preload-dl-github-941249" primary control-plane node in "test-preload-dl-github-941249" cluster
	I1228 07:04:57.379381  893941 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:04:57.380562  893941 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:04:57.381578  893941 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime containerd
	I1228 07:04:57.381667  893941 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:04:57.404521  893941 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:04:57.404541  893941 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 to local cache
	I1228 07:04:57.404651  893941 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local cache directory
	I1228 07:04:57.404672  893941 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local cache directory, skipping pull
	I1228 07:04:57.404678  893941 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in cache, skipping pull
	I1228 07:04:57.404691  893941 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 as a tarball
	I1228 07:04:57.694006  893941 preload.go:148] Found remote preload: https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I1228 07:04:57.694047  893941 cache.go:65] Caching tarball of preloaded images
	I1228 07:04:57.694272  893941 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime containerd
	I1228 07:04:57.696475  893941 out.go:179] * Downloading Kubernetes v1.34.0-rc.2 preload ...
	I1228 07:04:54.156136  890975 kubeadm.go:884] updating cluster {Name:newest-cni-190777 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:04:54.156327  890975 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:04:54.156404  890975 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:04:54.182897  890975 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:04:54.182922  890975 containerd.go:542] Images already preloaded, skipping extraction
	I1228 07:04:54.182974  890975 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:04:54.210854  890975 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:04:54.210878  890975 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:04:54.210886  890975 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0 containerd true true} ...
	I1228 07:04:54.210976  890975 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-190777 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 07:04:54.211041  890975 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1228 07:04:54.243309  890975 cni.go:84] Creating CNI manager for ""
	I1228 07:04:54.243337  890975 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:04:54.243357  890975 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1228 07:04:54.243390  890975 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-190777 NodeName:newest-cni-190777 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:04:54.243560  890975 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-190777"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 07:04:54.243637  890975 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:04:54.252672  890975 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:04:54.252750  890975 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:04:54.261653  890975 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1228 07:04:54.274491  890975 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
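	The two files just written are the kubelet systemd unit and the 10-kubeadm.conf drop-in whose contents appear earlier in this log; the doubled ExecStart= in that drop-in is the standard systemd idiom for clearing the packaged default before substituting minikube's kubelet invocation. A hedged way to read the result back from the node:
	
	  # print the generated drop-in (path taken from the scp line above)
	  minikube -p newest-cni-190777 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf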
	I1228 07:04:54.290148  890975 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
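	The 2230-byte file is the four-document kubeadm config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sanity check of such a file, assuming kubeadm v1.35.0 is on the node's PATH, is a dry run, which validates and renders everything without persisting state:
	
	  # exercises the config end to end; nothing is written to the cluster
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run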
	I1228 07:04:54.303677  890975 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:04:54.307765  890975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:04:54.319329  890975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:04:54.407309  890975 ssh_runner.go:195] Run: sudo systemctl start kubelet
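	The bash one-liner at 07:04:54.307765 is minikube's idempotent /etc/hosts rewrite: grep -v drops any stale control-plane.minikube.internal entry, the fresh mapping is appended, and the result is copied back over /etc/hosts via a temp file. A quick check of the outcome, assuming the profile is still up:
	
	  # should print: 192.168.103.2	control-plane.minikube.internal
	  minikube -p newest-cni-190777 ssh -- grep control-plane.minikube.internal /etc/hosts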
	I1228 07:04:54.428994  890975 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777 for IP: 192.168.103.2
	I1228 07:04:54.429018  890975 certs.go:195] generating shared ca certs ...
	I1228 07:04:54.429041  890975 certs.go:227] acquiring lock for ca certs: {Name:mkf3a34076ce55c96c0ca7e803bd863f5c48e3ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.429213  890975 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key
	I1228 07:04:54.429289  890975 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key
	I1228 07:04:54.429303  890975 certs.go:257] generating profile certs ...
	I1228 07:04:54.429385  890975 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.key
	I1228 07:04:54.429403  890975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.crt with IP's: []
	I1228 07:04:54.548598  890975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.crt ...
	I1228 07:04:54.548635  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.crt: {Name:mkd08dc3defb41e6cb9598c503c16c96e90f0b42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.548840  890975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.key ...
	I1228 07:04:54.548858  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.key: {Name:mk221973691b93552b36f745648af0626098b6ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.548982  890975 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key.74b88bb9
	I1228 07:04:54.549002  890975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt.74b88bb9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1228 07:04:54.752311  890975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt.74b88bb9 ...
	I1228 07:04:54.752343  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt.74b88bb9: {Name:mk7cf8b3d021c8d256fc9c6b1dfeef22fd232313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.752541  890975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key.74b88bb9 ...
	I1228 07:04:54.752561  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key.74b88bb9: {Name:mkd1e03fc46bf83eaf251f6f96dac8ed16146eeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.752695  890975 certs.go:382] copying /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt.74b88bb9 -> /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt
	I1228 07:04:54.752805  890975 certs.go:386] copying /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key.74b88bb9 -> /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key
	I1228 07:04:54.752894  890975 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.key
	I1228 07:04:54.752915  890975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.crt with IP's: []
	I1228 07:04:54.969153  890975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.crt ...
	I1228 07:04:54.969188  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.crt: {Name:mk4aac3247bad9d4fd84c69c062297c8ff05ca44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.969377  890975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.key ...
	I1228 07:04:54.969399  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.key: {Name:mke7a1ce2441ed9195ab16b7651e6bfb868f76fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.969599  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem (1338 bytes)
	W1228 07:04:54.969654  890975 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878_empty.pem, impossibly tiny 0 bytes
	I1228 07:04:54.969711  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 07:04:54.969757  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem (1078 bytes)
	I1228 07:04:54.969793  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:04:54.969829  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem (1675 bytes)
	I1228 07:04:54.969892  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem (1708 bytes)
	I1228 07:04:54.970562  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:04:54.992409  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:04:55.010822  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:04:55.028237  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:04:55.045540  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1228 07:04:55.065840  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1228 07:04:55.085090  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:04:55.103279  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 07:04:55.121123  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem --> /usr/share/ca-certificates/555878.pem (1338 bytes)
	I1228 07:04:55.144357  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem --> /usr/share/ca-certificates/5558782.pem (1708 bytes)
	I1228 07:04:55.163514  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:04:55.180645  890975 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
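	The apiserver certificate generated at 07:04:54.752311 must carry every address a client might dial, which is why it was signed for [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2] above. One way to confirm the SANs on the workspace copy, assuming openssl is installed:
	
	  # lists the IP SANs baked into the freshly generated apiserver cert
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'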
	I1228 07:04:55.193285  890975 ssh_runner.go:195] Run: openssl version
	I1228 07:04:55.199453  890975 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/555878.pem
	I1228 07:04:55.207377  890975 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/555878.pem /etc/ssl/certs/555878.pem
	I1228 07:04:55.214667  890975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/555878.pem
	I1228 07:04:55.218714  890975 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/555878.pem
	I1228 07:04:55.218773  890975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/555878.pem
	I1228 07:04:55.252941  890975 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:04:55.260873  890975 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/555878.pem /etc/ssl/certs/51391683.0
	I1228 07:04:55.268681  890975 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5558782.pem
	I1228 07:04:55.275856  890975 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5558782.pem /etc/ssl/certs/5558782.pem
	I1228 07:04:55.283139  890975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5558782.pem
	I1228 07:04:55.286993  890975 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/5558782.pem
	I1228 07:04:55.287045  890975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5558782.pem
	I1228 07:04:55.321768  890975 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:04:55.329608  890975 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5558782.pem /etc/ssl/certs/3ec20f2e.0
	I1228 07:04:55.337022  890975 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:55.344669  890975 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:04:55.352248  890975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:55.356339  890975 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:29 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:55.356398  890975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:55.396110  890975 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:04:55.405045  890975 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
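	The symlink names in this block (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: each "openssl x509 -hash -noout" run above computes the hash, and the <hash>.0 symlink is the filename OpenSSL consults when scanning /etc/ssl/certs as a trust directory. Reproducing the last one by hand:
	
	  # prints b5213941 for minikube's CA, matching /etc/ssl/certs/b5213941.0 above
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem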
	I1228 07:04:55.412931  890975 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:04:55.417426  890975 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 07:04:55.417515  890975 kubeadm.go:401] StartCluster: {Name:newest-cni-190777 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:04:55.417636  890975 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1228 07:04:55.429380  890975 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:04:55Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
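	This failed unpause probe is expected on a freshly provisioned node: /run/containerd/runc/k8s.io is runc's state root for containerd's k8s.io namespace and likely only exists once containers have actually been created there, so "cannot list" is treated as "nothing paused" and startup continues. The probe itself is just:
	
	  # succeeds (with a JSON array) only after containerd has run k8s containers
	  sudo runc --root /run/containerd/runc/k8s.io list -f json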
	I1228 07:04:55.429464  890975 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:04:55.437187  890975 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 07:04:55.445315  890975 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:04:55.445367  890975 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:04:55.452965  890975 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:04:55.452982  890975 kubeadm.go:158] found existing configuration files:
	
	I1228 07:04:55.453032  890975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:04:55.460809  890975 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:04:55.460879  890975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:04:55.468448  890975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:04:55.476086  890975 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:04:55.476140  890975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:04:55.483556  890975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:04:55.490991  890975 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:04:55.491116  890975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:04:55.498341  890975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:04:55.506479  890975 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:04:55.506521  890975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1228 07:04:55.513872  890975 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:04:55.617002  890975 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1228 07:04:55.678350  890975 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
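	Both preflight warnings are tolerated here: SystemVerification is explicitly listed in --ignore-preflight-errors above (skipped for the docker driver per the kubeadm.go:215 line), and the kubelet-enablement warning carries its own one-line fix:
	
	  # as suggested verbatim by the [WARNING Service-kubelet] line above
	  sudo systemctl enable kubelet.service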
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	909e7464e6cea       6e38f40d628db       13 seconds ago       Running             storage-provisioner       2                   150d025f67a54       storage-provisioner                                    kube-system
	84f058c3cf22a       07655ddf2eebe       47 seconds ago       Running             kubernetes-dashboard      0                   5db513c794987       kubernetes-dashboard-b84665fb8-flkqr                   kubernetes-dashboard
	cbf8cde051161       4921d7a6dffa9       54 seconds ago       Running             kindnet-cni               1                   a56a0ad3dbed9       kindnet-x5db5                                          kube-system
	b906d5ce1c58a       56cc512116c8f       55 seconds ago       Running             busybox                   1                   4c6440786264a       busybox                                                default
	61612ff6835aa       aa5e3ebc0dfed       55 seconds ago       Running             coredns                   1                   c604975bc47af       coredns-7d764666f9-mbfzh                               kube-system
	eb227c5a67e44       6e38f40d628db       55 seconds ago       Exited              storage-provisioner       1                   150d025f67a54       storage-provisioner                                    kube-system
	54f1a232ecef9       32652ff1bbe6b       55 seconds ago       Running             kube-proxy                1                   1afd4847ac04c       kube-proxy-rzg9h                                       kube-system
	532d30921a98f       550794e3b12ac       58 seconds ago       Running             kube-scheduler            1                   edd1fe84d0d26       kube-scheduler-default-k8s-diff-port-129908            kube-system
	1b5244e048b13       2c9a4b058bd7e       58 seconds ago       Running             kube-controller-manager   1                   ac3815021e369       kube-controller-manager-default-k8s-diff-port-129908   kube-system
	994dda5ef4c77       0a108f7189562       58 seconds ago       Running             etcd                      1                   c99d6eb6970a9       etcd-default-k8s-diff-port-129908                      kube-system
	d878d80d5e69a       5c6acd67e9cd1       58 seconds ago       Running             kube-apiserver            1                   7fbfff1ffc787       kube-apiserver-default-k8s-diff-port-129908            kube-system
	d5e2c9cff97ab       56cc512116c8f       About a minute ago   Exited              busybox                   0                   476deabe84702       busybox                                                default
	152a15a03ce93       aa5e3ebc0dfed       About a minute ago   Exited              coredns                   0                   31e74098495e8       coredns-7d764666f9-mbfzh                               kube-system
	c3a11f91c2c38       4921d7a6dffa9       About a minute ago   Exited              kindnet-cni               0                   dcfa59b155653       kindnet-x5db5                                          kube-system
	c9ebd507d816b       32652ff1bbe6b       About a minute ago   Exited              kube-proxy                0                   d7559b5b8f597       kube-proxy-rzg9h                                       kube-system
	e533e9b8d21eb       5c6acd67e9cd1       About a minute ago   Exited              kube-apiserver            0                   c185c7eb98f51       kube-apiserver-default-k8s-diff-port-129908            kube-system
	fa7b95e95263a       550794e3b12ac       About a minute ago   Exited              kube-scheduler            0                   c8c6519e2c9cf       kube-scheduler-default-k8s-diff-port-129908            kube-system
	3a7d652f1932d       2c9a4b058bd7e       About a minute ago   Exited              kube-controller-manager   0                   3a1fcdaaafb83       kube-controller-manager-default-k8s-diff-port-129908   kube-system
	1322b3b451f64       0a108f7189562       About a minute ago   Exited              etcd                      0                   4c413ee876c59       etcd-default-k8s-diff-port-129908                      kube-system
	
	
	==> containerd <==
	Dec 28 07:04:54 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:54.149862014Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.821846142Z" level=info msg="StopPodSandbox for \"7216798075a547b211196930482b3e10a4add00f3d5bc32c49f9829f0b5049b7\""
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.853567048Z" level=info msg="TearDown network for sandbox \"7216798075a547b211196930482b3e10a4add00f3d5bc32c49f9829f0b5049b7\" successfully"
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.853630595Z" level=info msg="StopPodSandbox for \"7216798075a547b211196930482b3e10a4add00f3d5bc32c49f9829f0b5049b7\" returns successfully"
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.854825423Z" level=info msg="RemovePodSandbox for \"7216798075a547b211196930482b3e10a4add00f3d5bc32c49f9829f0b5049b7\""
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.854880702Z" level=info msg="Forcibly stopping sandbox \"7216798075a547b211196930482b3e10a4add00f3d5bc32c49f9829f0b5049b7\""
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.891691355Z" level=info msg="TearDown network for sandbox \"7216798075a547b211196930482b3e10a4add00f3d5bc32c49f9829f0b5049b7\" successfully"
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.896126577Z" level=info msg="Ensure that sandbox 7216798075a547b211196930482b3e10a4add00f3d5bc32c49f9829f0b5049b7 in task-service has been cleanup successfully"
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.899689580Z" level=info msg="RemovePodSandbox \"7216798075a547b211196930482b3e10a4add00f3d5bc32c49f9829f0b5049b7\" returns successfully"
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.900278059Z" level=info msg="StopPodSandbox for \"8e82f987aa396d8f184a23c3ea47c3efaf98d82de2bb38a5f3633d14775501a4\""
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.900811324Z" level=info msg="TearDown network for sandbox \"8e82f987aa396d8f184a23c3ea47c3efaf98d82de2bb38a5f3633d14775501a4\" successfully"
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.900861747Z" level=info msg="StopPodSandbox for \"8e82f987aa396d8f184a23c3ea47c3efaf98d82de2bb38a5f3633d14775501a4\" returns successfully"
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.901417031Z" level=info msg="RemovePodSandbox for \"8e82f987aa396d8f184a23c3ea47c3efaf98d82de2bb38a5f3633d14775501a4\""
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.901458946Z" level=info msg="Forcibly stopping sandbox \"8e82f987aa396d8f184a23c3ea47c3efaf98d82de2bb38a5f3633d14775501a4\""
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.901913772Z" level=info msg="TearDown network for sandbox \"8e82f987aa396d8f184a23c3ea47c3efaf98d82de2bb38a5f3633d14775501a4\" successfully"
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.904264229Z" level=info msg="Ensure that sandbox 8e82f987aa396d8f184a23c3ea47c3efaf98d82de2bb38a5f3633d14775501a4 in task-service has been cleanup successfully"
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.909759677Z" level=info msg="RemovePodSandbox \"8e82f987aa396d8f184a23c3ea47c3efaf98d82de2bb38a5f3633d14775501a4\" returns successfully"
	Dec 28 07:04:59 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:59.071511238Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Dec 28 07:05:00 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:05:00.105992899Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 28 07:05:00 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:05:00.150079714Z" level=info msg="fetch failed" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Dec 28 07:05:00 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:05:00.151241544Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Dec 28 07:05:00 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:05:00.151269384Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:05:00 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:05:00.152131908Z" level=info msg="PullImage \"registry.k8s.io/echoserver:1.4\""
	Dec 28 07:05:00 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:05:00.201865357Z" level=error msg="PullImage \"registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\""
	Dec 28 07:05:00 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:05:00.201994039Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
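	The two pull failures at 07:05:00 have distinct causes: the first is plain DNS (fake.domain does not resolve), while the second is containerd v2.1+ rejecting Docker schema 1 manifests, exactly as the Unimplemented error states; registry.k8s.io/echoserver:1.4 is an old image still published with the v1+prettyjws media type. A hedged way to see the offending media type, assuming the crane CLI from go-containerregistry is installed:
	
	  # the manifest's mediaType field is what containerd v2.1+ refuses to unpack
	  crane manifest registry.k8s.io/echoserver:1.4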
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-129908
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-129908
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=default-k8s-diff-port-129908
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T07_03_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 07:03:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-129908
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 07:04:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 07:04:59 +0000   Sun, 28 Dec 2025 07:03:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 07:04:59 +0000   Sun, 28 Dec 2025 07:03:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 07:04:59 +0000   Sun, 28 Dec 2025 07:03:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 07:04:59 +0000   Sun, 28 Dec 2025 07:03:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-129908
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                6c3c2967-9bf8-4a72-930f-d1c5a16decea
	  Boot ID:                    b0f6328b-901c-4d58-bf8e-80c711dcb897
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-7d764666f9-mbfzh                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-default-k8s-diff-port-129908                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-x5db5                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-default-k8s-diff-port-129908             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-129908    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-rzg9h                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-default-k8s-diff-port-129908             100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 metrics-server-5d785b57d4-6h7tv                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         78s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-7tc75              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-flkqr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  106s  node-controller  Node default-k8s-diff-port-129908 event: Registered Node default-k8s-diff-port-129908 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node default-k8s-diff-port-129908 event: Registered Node default-k8s-diff-port-129908 in Controller
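	The node dump above is the standard description output and can be regenerated against the same kubeconfig context with:
	
	  kubectl --context default-k8s-diff-port-129908 describe node default-k8s-diff-port-129908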
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 27 65 9e 6b ac 08 06
	[  +0.000464] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 2e 94 6c 92 e4 08 06
	[Dec28 07:01] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff a6 29 4f 66 86 ca 08 06
	[  +0.004985] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 91 2b a7 d5 cf 08 06
	[ +18.742603] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 b5 93 ca e4 71 08 06
	[  +0.109309] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 c6 37 a9 4f 21 08 06
	[  +8.643062] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 52 6e 36 23 da 08 06
	[ +19.438211] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 70 fa ed e0 93 08 06
	[  +0.000530] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa 52 6e 36 23 da 08 06
	[  +0.857565] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 1d c3 e6 9a 7e 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 29 4f 66 86 ca 08 06
	[Dec28 07:02] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 a0 55 33 45 d1 08 06
	[  +0.000392] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 c6 37 a9 4f 21 08 06
	
	
	==> kernel <==
	 07:05:00 up  3:47,  0 user,  load average: 3.25, 3.05, 10.56
	Linux default-k8s-diff-port-129908 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: I1228 07:04:59.834277    2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fb316251-22e1-43e0-b728-1f470d02cacb-cni-cfg\") pod \"kindnet-x5db5\" (UID: \"fb316251-22e1-43e0-b728-1f470d02cacb\") " pod="kube-system/kindnet-x5db5"
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: I1228 07:04:59.935112    2399 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-default-k8s-diff-port-129908"
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: I1228 07:04:59.935285    2399 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-default-k8s-diff-port-129908"
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: I1228 07:04:59.935376    2399 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-default-k8s-diff-port-129908"
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: I1228 07:04:59.935644    2399 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-default-k8s-diff-port-129908"
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:04:59.943793    2399 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-129908\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-129908"
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:04:59.943914    2399 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-129908" containerName="kube-scheduler"
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:04:59.943960    2399 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-129908\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-129908"
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:04:59.944036    2399 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-129908" containerName="kube-apiserver"
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:04:59.944569    2399 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-default-k8s-diff-port-129908\" already exists" pod="kube-system/kube-controller-manager-default-k8s-diff-port-129908"
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:04:59.944664    2399 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-129908" containerName="kube-controller-manager"
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:04:59.945028    2399 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-129908\" already exists" pod="kube-system/etcd-default-k8s-diff-port-129908"
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:04:59.945096    2399 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-129908" containerName="etcd"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.151582    2399 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.151679    2399 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.152029    2399 kuberuntime_manager.go:1664] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-5d785b57d4-6h7tv_kube-system(3f1c607c-1aeb-4fc7-9865-8b161c339288): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" logger="UnhandledError"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.152093    2399 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-6h7tv" podUID="3f1c607c-1aeb-4fc7-9865-8b161c339288"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.202496    2399 log.go:32] "PullImage from image service failed" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.202583    2399 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.202887    2399 kuberuntime_manager.go:1664] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-867fb5f87b-7tc75_kubernetes-dashboard(99241345-6a9c-4473-a743-ec19ad1b10a7): ErrImagePull: rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" logger="UnhandledError"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.202962    2399 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"rpc error: code = Unimplemented desc = failed to pull and unpack image \\\"registry.k8s.io/echoserver:1.4\\\": not implemented: media type \\\"application/vnd.docker.distribution.manifest.v1+prettyjws\\\" is no longer supported since containerd v2.1, please rebuild the image as \\\"application/vnd.docker.distribution.manifest.v2+json\\\" or \\\"application/vnd.oci.image.manifest.v1+json\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tc75" podUID="99241345-6a9c-4473-a743-ec19ad1b10a7"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.939190    2399 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-129908" containerName="kube-apiserver"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.939559    2399 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-129908" containerName="etcd"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.939987    2399 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-129908" containerName="kube-scheduler"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.940206    2399 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-129908" containerName="kube-controller-manager"
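	These ErrImagePull entries are the same two failures logged by containerd above, surfacing through the kubelet; pointing metrics-server at fake.domain appears to be a deliberate fixture of this test suite rather than a regression. To see which image the failing pod is actually requesting (names taken from this log):
	
	  kubectl --context default-k8s-diff-port-129908 -n kube-system \
	    get pod metrics-server-5d785b57d4-6h7tv -o jsonpath='{.spec.containers[0].image}'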
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-129908 -n default-k8s-diff-port-129908
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-129908 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-5d785b57d4-6h7tv dashboard-metrics-scraper-867fb5f87b-7tc75
helpers_test.go:283: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context default-k8s-diff-port-129908 describe pod metrics-server-5d785b57d4-6h7tv dashboard-metrics-scraper-867fb5f87b-7tc75
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-129908 describe pod metrics-server-5d785b57d4-6h7tv dashboard-metrics-scraper-867fb5f87b-7tc75: exit status 1 (77.015095ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5d785b57d4-6h7tv" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-7tc75" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context default-k8s-diff-port-129908 describe pod metrics-server-5d785b57d4-6h7tv dashboard-metrics-scraper-867fb5f87b-7tc75: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-129908
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-129908:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f3ad300c2e556f4d1d26048ce6da2a6ca4440eff851f10c11aac4b5101eda437",
	        "Created": "2025-12-28T07:02:56.427750014Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 882451,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:03:55.998305648Z",
	            "FinishedAt": "2025-12-28T07:03:54.908388831Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/f3ad300c2e556f4d1d26048ce6da2a6ca4440eff851f10c11aac4b5101eda437/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f3ad300c2e556f4d1d26048ce6da2a6ca4440eff851f10c11aac4b5101eda437/hostname",
	        "HostsPath": "/var/lib/docker/containers/f3ad300c2e556f4d1d26048ce6da2a6ca4440eff851f10c11aac4b5101eda437/hosts",
	        "LogPath": "/var/lib/docker/containers/f3ad300c2e556f4d1d26048ce6da2a6ca4440eff851f10c11aac4b5101eda437/f3ad300c2e556f4d1d26048ce6da2a6ca4440eff851f10c11aac4b5101eda437-json.log",
	        "Name": "/default-k8s-diff-port-129908",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-129908:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-129908",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f3ad300c2e556f4d1d26048ce6da2a6ca4440eff851f10c11aac4b5101eda437",
	                "LowerDir": "/var/lib/docker/overlay2/b9b42e2ff43ef9b1d7d999eec7f438588819ad6fc123e7b3b06d1e993e7e0a39-init/diff:/var/lib/docker/overlay2/dfc7a4c580b7be84b0f83410b7478c6f6aa3f00b996556623ab9129bd6527422/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b9b42e2ff43ef9b1d7d999eec7f438588819ad6fc123e7b3b06d1e993e7e0a39/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b9b42e2ff43ef9b1d7d999eec7f438588819ad6fc123e7b3b06d1e993e7e0a39/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b9b42e2ff43ef9b1d7d999eec7f438588819ad6fc123e7b3b06d1e993e7e0a39/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-129908",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-129908/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-129908",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-129908",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-129908",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3ed413bba346cc53c7538a7e6b8e51f6bb5b193c816299a1c5aaf86b7d0c7cba",
	            "SandboxKey": "/var/run/docker/netns/3ed413bba346",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-129908": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9d99ed26244369d409963449a6dbb632f98886c438bb8df08d0e6054451cb281",
	                    "EndpointID": "263ce74b40f91c9e07d4ef160abaf52899dce473add501eb0fa35f16064b3a68",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "8e:1b:ef:c8:e0:da",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-129908",
	                        "f3ad300c2e55"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
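helpers_test.go dumps the full `docker inspect` JSON above so the node container's state and published ports can be checked after a failure. For readers working through such a post-mortem by hand, here is a minimal Go sketch that decodes the same output and prints the fields this report keys on; the struct is a hand-rolled subset of the inspect response, not the official Docker client type, and the container name is the one from this run:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Subset of the `docker inspect` JSON shown in the post-mortem above.
type inspectEntry struct {
	Name  string `json:"Name"`
	State struct {
		Status  string `json:"Status"`
		Running bool   `json:"Running"`
		Paused  bool   `json:"Paused"`
	} `json:"State"`
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	// Container name from this test run; substitute your own profile.
	out, err := exec.Command("docker", "inspect", "default-k8s-diff-port-129908").Output()
	if err != nil {
		log.Fatal(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		fmt.Printf("%s: status=%s running=%t paused=%t\n",
			e.Name, e.State.Status, e.State.Running, e.State.Paused)
		for port, bindings := range e.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("  %s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}
}

Shelling out to `docker inspect` keeps the sketch dependency-free; the official Docker Go client exposes the same data, but a narrow hand-written struct is enough to check `State` and `Ports` here.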
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-129908 -n default-k8s-diff-port-129908
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-129908 logs -n 25
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │            PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p default-k8s-diff-port-129908 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-982151 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-982151            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p embed-certs-982151 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                        │ embed-certs-982151            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:04 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-129908 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p default-k8s-diff-port-129908 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:04 UTC │
	│ image   │ no-preload-456925 image list --format=json                                                                                                                                                                                                          │ no-preload-456925             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p no-preload-456925 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-456925             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ image   │ old-k8s-version-805353 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-805353        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ unpause │ -p no-preload-456925 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-456925             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p old-k8s-version-805353 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-805353        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ unpause │ -p old-k8s-version-805353 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-805353        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ delete  │ -p no-preload-456925                                                                                                                                                                                                                                │ no-preload-456925             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ delete  │ -p old-k8s-version-805353                                                                                                                                                                                                                           │ old-k8s-version-805353        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ delete  │ -p no-preload-456925                                                                                                                                                                                                                                │ no-preload-456925             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ start   │ -p newest-cni-190777 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0 │ newest-cni-190777             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │                     │
	│ delete  │ -p old-k8s-version-805353                                                                                                                                                                                                                           │ old-k8s-version-805353        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ start   │ -p test-preload-dl-gcs-287055 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd                                                                        │ test-preload-dl-gcs-287055    │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │                     │
	│ image   │ default-k8s-diff-port-129908 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p default-k8s-diff-port-129908 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ delete  │ -p test-preload-dl-gcs-287055                                                                                                                                                                                                                       │ test-preload-dl-gcs-287055    │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ start   │ -p test-preload-dl-github-941249 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd                                                                  │ test-preload-dl-github-941249 │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │                     │
	│ unpause │ -p default-k8s-diff-port-129908 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ image   │ embed-certs-982151 image list --format=json                                                                                                                                                                                                         │ embed-certs-982151            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p embed-certs-982151 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-982151            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:05 UTC │
	│ unpause │ -p embed-certs-982151 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-982151            │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:04:57
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:04:57.195692  893941 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:04:57.195974  893941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:04:57.195986  893941 out.go:374] Setting ErrFile to fd 2...
	I1228 07:04:57.195990  893941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:04:57.196196  893941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 07:04:57.196751  893941 out.go:368] Setting JSON to false
	I1228 07:04:57.198139  893941 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13641,"bootTime":1766891856,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 07:04:57.198212  893941 start.go:143] virtualization: kvm guest
	I1228 07:04:57.202341  893941 out.go:179] * [test-preload-dl-github-941249] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 07:04:57.203691  893941 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:04:57.203750  893941 notify.go:221] Checking for updates...
	I1228 07:04:57.206035  893941 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:04:57.207229  893941 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:04:57.208389  893941 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	I1228 07:04:57.209477  893941 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 07:04:57.210591  893941 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:04:57.212115  893941 config.go:182] Loaded profile config "default-k8s-diff-port-129908": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:04:57.212300  893941 config.go:182] Loaded profile config "embed-certs-982151": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:04:57.212438  893941 config.go:182] Loaded profile config "newest-cni-190777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:04:57.212567  893941 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:04:57.238559  893941 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 07:04:57.238660  893941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:04:57.297433  893941 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 07:04:57.286031992 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 07:04:57.297580  893941 docker.go:319] overlay module found
	I1228 07:04:57.298969  893941 out.go:179] * Using the docker driver based on user configuration
	I1228 07:04:57.300613  893941 start.go:309] selected driver: docker
	I1228 07:04:57.300634  893941 start.go:928] validating driver "docker" against <nil>
	I1228 07:04:57.300764  893941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:04:57.373040  893941 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 07:04:57.362531212 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 07:04:57.373260  893941 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 07:04:57.374009  893941 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1228 07:04:57.374205  893941 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 07:04:57.375934  893941 out.go:179] * Using Docker driver with root privileges
	I1228 07:04:57.376909  893941 cni.go:84] Creating CNI manager for ""
	I1228 07:04:57.376990  893941 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:04:57.377006  893941 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1228 07:04:57.377090  893941 start.go:353] cluster config:
	{Name:test-preload-dl-github-941249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-rc.2 ClusterName:test-preload-dl-github-941249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:04:57.378318  893941 out.go:179] * Starting "test-preload-dl-github-941249" primary control-plane node in "test-preload-dl-github-941249" cluster
	I1228 07:04:57.379381  893941 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:04:57.380562  893941 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:04:57.381578  893941 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime containerd
	I1228 07:04:57.381667  893941 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:04:57.404521  893941 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:04:57.404541  893941 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 to local cache
	I1228 07:04:57.404651  893941 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local cache directory
	I1228 07:04:57.404672  893941 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local cache directory, skipping pull
	I1228 07:04:57.404678  893941 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in cache, skipping pull
	I1228 07:04:57.404691  893941 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 as a tarball
	I1228 07:04:57.694006  893941 preload.go:148] Found remote preload: https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I1228 07:04:57.694047  893941 cache.go:65] Caching tarball of preloaded images
	I1228 07:04:57.694272  893941 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime containerd
	I1228 07:04:57.696475  893941 out.go:179] * Downloading Kubernetes v1.34.0-rc.2 preload ...
	I1228 07:04:54.156136  890975 kubeadm.go:884] updating cluster {Name:newest-cni-190777 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:04:54.156327  890975 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:04:54.156404  890975 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:04:54.182897  890975 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:04:54.182922  890975 containerd.go:542] Images already preloaded, skipping extraction
	I1228 07:04:54.182974  890975 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:04:54.210854  890975 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:04:54.210878  890975 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:04:54.210886  890975 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0 containerd true true} ...
	I1228 07:04:54.210976  890975 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-190777 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 07:04:54.211041  890975 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1228 07:04:54.243309  890975 cni.go:84] Creating CNI manager for ""
	I1228 07:04:54.243337  890975 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:04:54.243357  890975 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1228 07:04:54.243390  890975 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-190777 NodeName:newest-cni-190777 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:04:54.243560  890975 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-190777"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
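The kubeadm config dumped above is multi-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---`. A small Go sketch for pulling one document out of such a dump, assuming the config has been saved locally as kubeadm.yaml (on the node itself it lands at /var/tmp/minikube/kubeadm.yaml) and using the third-party gopkg.in/yaml.v3 package:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// Minimal view of the KubeletConfiguration document; only the
// fields checked here are declared, everything else is ignored.
type kubeletConfig struct {
	Kind         string `yaml:"kind"`
	CgroupDriver string `yaml:"cgroupDriver"`
	FailSwapOn   bool   `yaml:"failSwapOn"`
}

func main() {
	raw, err := os.ReadFile("kubeadm.yaml") // placeholder local copy
	if err != nil {
		log.Fatal(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(raw))
	for {
		var doc kubeletConfig
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF once all "---"-separated documents are read
		}
		if doc.Kind == "KubeletConfiguration" {
			fmt.Printf("cgroupDriver=%s failSwapOn=%t\n", doc.CgroupDriver, doc.FailSwapOn)
		}
	}
}

Decoding every document into the same loosely-typed struct and switching on `kind` avoids importing the full kubeadm API types just to confirm, say, that `cgroupDriver: systemd` matches the CgroupDriver the docker info output reported earlier.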
	
	I1228 07:04:54.243637  890975 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:04:54.252672  890975 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:04:54.252750  890975 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:04:54.261653  890975 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1228 07:04:54.274491  890975 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:04:54.290148  890975 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1228 07:04:54.303677  890975 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:04:54.307765  890975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:04:54.319329  890975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:04:54.407309  890975 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:04:54.428994  890975 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777 for IP: 192.168.103.2
	I1228 07:04:54.429018  890975 certs.go:195] generating shared ca certs ...
	I1228 07:04:54.429041  890975 certs.go:227] acquiring lock for ca certs: {Name:mkf3a34076ce55c96c0ca7e803bd863f5c48e3ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.429213  890975 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key
	I1228 07:04:54.429289  890975 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key
	I1228 07:04:54.429303  890975 certs.go:257] generating profile certs ...
	I1228 07:04:54.429385  890975 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.key
	I1228 07:04:54.429403  890975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.crt with IP's: []
	I1228 07:04:54.548598  890975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.crt ...
	I1228 07:04:54.548635  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.crt: {Name:mkd08dc3defb41e6cb9598c503c16c96e90f0b42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.548840  890975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.key ...
	I1228 07:04:54.548858  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.key: {Name:mk221973691b93552b36f745648af0626098b6ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.548982  890975 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key.74b88bb9
	I1228 07:04:54.549002  890975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt.74b88bb9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1228 07:04:54.752311  890975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt.74b88bb9 ...
	I1228 07:04:54.752343  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt.74b88bb9: {Name:mk7cf8b3d021c8d256fc9c6b1dfeef22fd232313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.752541  890975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key.74b88bb9 ...
	I1228 07:04:54.752561  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key.74b88bb9: {Name:mkd1e03fc46bf83eaf251f6f96dac8ed16146eeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.752695  890975 certs.go:382] copying /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt.74b88bb9 -> /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt
	I1228 07:04:54.752805  890975 certs.go:386] copying /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key.74b88bb9 -> /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key
	I1228 07:04:54.752894  890975 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.key
	I1228 07:04:54.752915  890975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.crt with IP's: []
	I1228 07:04:54.969153  890975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.crt ...
	I1228 07:04:54.969188  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.crt: {Name:mk4aac3247bad9d4fd84c69c062297c8ff05ca44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.969377  890975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.key ...
	I1228 07:04:54.969399  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.key: {Name:mke7a1ce2441ed9195ab16b7651e6bfb868f76fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.969599  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem (1338 bytes)
	W1228 07:04:54.969654  890975 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878_empty.pem, impossibly tiny 0 bytes
	I1228 07:04:54.969711  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 07:04:54.969757  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem (1078 bytes)
	I1228 07:04:54.969793  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:04:54.969829  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem (1675 bytes)
	I1228 07:04:54.969892  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem (1708 bytes)
	I1228 07:04:54.970562  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:04:54.992409  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:04:55.010822  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:04:55.028237  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:04:55.045540  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1228 07:04:55.065840  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1228 07:04:55.085090  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:04:55.103279  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 07:04:55.121123  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem --> /usr/share/ca-certificates/555878.pem (1338 bytes)
	I1228 07:04:55.144357  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem --> /usr/share/ca-certificates/5558782.pem (1708 bytes)
	I1228 07:04:55.163514  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:04:55.180645  890975 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:04:55.193285  890975 ssh_runner.go:195] Run: openssl version
	I1228 07:04:55.199453  890975 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/555878.pem
	I1228 07:04:55.207377  890975 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/555878.pem /etc/ssl/certs/555878.pem
	I1228 07:04:55.214667  890975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/555878.pem
	I1228 07:04:55.218714  890975 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/555878.pem
	I1228 07:04:55.218773  890975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/555878.pem
	I1228 07:04:55.252941  890975 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:04:55.260873  890975 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/555878.pem /etc/ssl/certs/51391683.0
	I1228 07:04:55.268681  890975 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5558782.pem
	I1228 07:04:55.275856  890975 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5558782.pem /etc/ssl/certs/5558782.pem
	I1228 07:04:55.283139  890975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5558782.pem
	I1228 07:04:55.286993  890975 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/5558782.pem
	I1228 07:04:55.287045  890975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5558782.pem
	I1228 07:04:55.321768  890975 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:04:55.329608  890975 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5558782.pem /etc/ssl/certs/3ec20f2e.0
	I1228 07:04:55.337022  890975 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:55.344669  890975 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:04:55.352248  890975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:55.356339  890975 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:29 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:55.356398  890975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:55.396110  890975 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:04:55.405045  890975 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
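The openssl/ln pairs in the run above implement the standard CA hash-directory layout: `openssl x509 -hash -noout` prints the certificate's subject-name hash, and the cert is then symlinked as /etc/ssl/certs/<hash>.0 so TLS libraries can look it up by hash (minikubeCA.pem hashes to b5213941 here). A minimal Go sketch of the same computation, shelling out to openssl just as the harness does; the cert path is a placeholder:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Placeholder; the run above uses /usr/share/ca-certificates/minikubeCA.pem.
	cert := "minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	// ln -fs <cert> /etc/ssl/certs/<hash>.0 makes the cert discoverable by subject hash.
	fmt.Printf("expected symlink: /etc/ssl/certs/%s.0 -> %s\n", hash, cert)
}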
	I1228 07:04:55.412931  890975 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:04:55.417426  890975 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 07:04:55.417515  890975 kubeadm.go:401] StartCluster: {Name:newest-cni-190777 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:04:55.417636  890975 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1228 07:04:55.429380  890975 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:04:55Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
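The `unpause failed` warning above is expected on a brand-new node: before kubeadm has started any pods, containerd's runc root /run/containerd/runc/k8s.io does not exist, so the pre-start check for leftover paused containers has nothing to list. A hedged Go sketch of that kind of check (the `id` and `status` field names follow the JSON that `runc list -f json` emits; treat this as an illustration, not minikube's actual implementation):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Fields of interest from `runc list -f json`; runc emits more.
type runcState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "runc",
		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	if err != nil {
		// On a fresh node the root directory is missing, exactly as in the log above.
		log.Fatalf("runc list: %v", err)
	}
	var states []runcState
	if err := json.Unmarshal(out, &states); err != nil {
		log.Fatal(err)
	}
	for _, s := range states {
		if s.Status == "paused" {
			fmt.Println("paused container:", s.ID)
		}
	}
}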
	I1228 07:04:55.429464  890975 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:04:55.437187  890975 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 07:04:55.445315  890975 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:04:55.445367  890975 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:04:55.452965  890975 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:04:55.452982  890975 kubeadm.go:158] found existing configuration files:
	
	I1228 07:04:55.453032  890975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:04:55.460809  890975 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:04:55.460879  890975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:04:55.468448  890975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:04:55.476086  890975 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:04:55.476140  890975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:04:55.483556  890975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:04:55.490991  890975 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:04:55.491116  890975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:04:55.498341  890975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:04:55.506479  890975 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:04:55.506521  890975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1228 07:04:55.513872  890975 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:04:55.617002  890975 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1228 07:04:55.678350  890975 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	909e7464e6cea       6e38f40d628db       15 seconds ago       Running             storage-provisioner       2                   150d025f67a54       storage-provisioner                                    kube-system
	84f058c3cf22a       07655ddf2eebe       49 seconds ago       Running             kubernetes-dashboard      0                   5db513c794987       kubernetes-dashboard-b84665fb8-flkqr                   kubernetes-dashboard
	cbf8cde051161       4921d7a6dffa9       56 seconds ago       Running             kindnet-cni               1                   a56a0ad3dbed9       kindnet-x5db5                                          kube-system
	b906d5ce1c58a       56cc512116c8f       57 seconds ago       Running             busybox                   1                   4c6440786264a       busybox                                                default
	61612ff6835aa       aa5e3ebc0dfed       57 seconds ago       Running             coredns                   1                   c604975bc47af       coredns-7d764666f9-mbfzh                               kube-system
	eb227c5a67e44       6e38f40d628db       57 seconds ago       Exited              storage-provisioner       1                   150d025f67a54       storage-provisioner                                    kube-system
	54f1a232ecef9       32652ff1bbe6b       57 seconds ago       Running             kube-proxy                1                   1afd4847ac04c       kube-proxy-rzg9h                                       kube-system
	532d30921a98f       550794e3b12ac       About a minute ago   Running             kube-scheduler            1                   edd1fe84d0d26       kube-scheduler-default-k8s-diff-port-129908            kube-system
	1b5244e048b13       2c9a4b058bd7e       About a minute ago   Running             kube-controller-manager   1                   ac3815021e369       kube-controller-manager-default-k8s-diff-port-129908   kube-system
	994dda5ef4c77       0a108f7189562       About a minute ago   Running             etcd                      1                   c99d6eb6970a9       etcd-default-k8s-diff-port-129908                      kube-system
	d878d80d5e69a       5c6acd67e9cd1       About a minute ago   Running             kube-apiserver            1                   7fbfff1ffc787       kube-apiserver-default-k8s-diff-port-129908            kube-system
	d5e2c9cff97ab       56cc512116c8f       About a minute ago   Exited              busybox                   0                   476deabe84702       busybox                                                default
	152a15a03ce93       aa5e3ebc0dfed       About a minute ago   Exited              coredns                   0                   31e74098495e8       coredns-7d764666f9-mbfzh                               kube-system
	c3a11f91c2c38       4921d7a6dffa9       About a minute ago   Exited              kindnet-cni               0                   dcfa59b155653       kindnet-x5db5                                          kube-system
	c9ebd507d816b       32652ff1bbe6b       About a minute ago   Exited              kube-proxy                0                   d7559b5b8f597       kube-proxy-rzg9h                                       kube-system
	e533e9b8d21eb       5c6acd67e9cd1       About a minute ago   Exited              kube-apiserver            0                   c185c7eb98f51       kube-apiserver-default-k8s-diff-port-129908            kube-system
	fa7b95e95263a       550794e3b12ac       About a minute ago   Exited              kube-scheduler            0                   c8c6519e2c9cf       kube-scheduler-default-k8s-diff-port-129908            kube-system
	3a7d652f1932d       2c9a4b058bd7e       About a minute ago   Exited              kube-controller-manager   0                   3a1fcdaaafb83       kube-controller-manager-default-k8s-diff-port-129908   kube-system
	1322b3b451f64       0a108f7189562       About a minute ago   Exited              etcd                      0                   4c413ee876c59       etcd-default-k8s-diff-port-129908                      kube-system
	
	
	==> containerd <==
	Dec 28 07:04:54 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:54.149862014Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.821846142Z" level=info msg="StopPodSandbox for \"7216798075a547b211196930482b3e10a4add00f3d5bc32c49f9829f0b5049b7\""
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.853567048Z" level=info msg="TearDown network for sandbox \"7216798075a547b211196930482b3e10a4add00f3d5bc32c49f9829f0b5049b7\" successfully"
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.853630595Z" level=info msg="StopPodSandbox for \"7216798075a547b211196930482b3e10a4add00f3d5bc32c49f9829f0b5049b7\" returns successfully"
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.854825423Z" level=info msg="RemovePodSandbox for \"7216798075a547b211196930482b3e10a4add00f3d5bc32c49f9829f0b5049b7\""
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.854880702Z" level=info msg="Forcibly stopping sandbox \"7216798075a547b211196930482b3e10a4add00f3d5bc32c49f9829f0b5049b7\""
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.891691355Z" level=info msg="TearDown network for sandbox \"7216798075a547b211196930482b3e10a4add00f3d5bc32c49f9829f0b5049b7\" successfully"
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.896126577Z" level=info msg="Ensure that sandbox 7216798075a547b211196930482b3e10a4add00f3d5bc32c49f9829f0b5049b7 in task-service has been cleanup successfully"
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.899689580Z" level=info msg="RemovePodSandbox \"7216798075a547b211196930482b3e10a4add00f3d5bc32c49f9829f0b5049b7\" returns successfully"
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.900278059Z" level=info msg="StopPodSandbox for \"8e82f987aa396d8f184a23c3ea47c3efaf98d82de2bb38a5f3633d14775501a4\""
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.900811324Z" level=info msg="TearDown network for sandbox \"8e82f987aa396d8f184a23c3ea47c3efaf98d82de2bb38a5f3633d14775501a4\" successfully"
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.900861747Z" level=info msg="StopPodSandbox for \"8e82f987aa396d8f184a23c3ea47c3efaf98d82de2bb38a5f3633d14775501a4\" returns successfully"
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.901417031Z" level=info msg="RemovePodSandbox for \"8e82f987aa396d8f184a23c3ea47c3efaf98d82de2bb38a5f3633d14775501a4\""
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.901458946Z" level=info msg="Forcibly stopping sandbox \"8e82f987aa396d8f184a23c3ea47c3efaf98d82de2bb38a5f3633d14775501a4\""
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.901913772Z" level=info msg="TearDown network for sandbox \"8e82f987aa396d8f184a23c3ea47c3efaf98d82de2bb38a5f3633d14775501a4\" successfully"
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.904264229Z" level=info msg="Ensure that sandbox 8e82f987aa396d8f184a23c3ea47c3efaf98d82de2bb38a5f3633d14775501a4 in task-service has been cleanup successfully"
	Dec 28 07:04:58 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:58.909759677Z" level=info msg="RemovePodSandbox \"8e82f987aa396d8f184a23c3ea47c3efaf98d82de2bb38a5f3633d14775501a4\" returns successfully"
	Dec 28 07:04:59 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:04:59.071511238Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Dec 28 07:05:00 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:05:00.105992899Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 28 07:05:00 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:05:00.150079714Z" level=info msg="fetch failed" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Dec 28 07:05:00 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:05:00.151241544Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Dec 28 07:05:00 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:05:00.151269384Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:05:00 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:05:00.152131908Z" level=info msg="PullImage \"registry.k8s.io/echoserver:1.4\""
	Dec 28 07:05:00 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:05:00.201865357Z" level=error msg="PullImage \"registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\""
	Dec 28 07:05:00 default-k8s-diff-port-129908 containerd[452]: time="2025-12-28T07:05:00.201994039Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-129908
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-129908
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=default-k8s-diff-port-129908
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T07_03_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 07:03:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-129908
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 07:04:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 07:04:59 +0000   Sun, 28 Dec 2025 07:03:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 07:04:59 +0000   Sun, 28 Dec 2025 07:03:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 07:04:59 +0000   Sun, 28 Dec 2025 07:03:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 07:04:59 +0000   Sun, 28 Dec 2025 07:03:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-129908
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                6c3c2967-9bf8-4a72-930f-d1c5a16decea
	  Boot ID:                    b0f6328b-901c-4d58-bf8e-80c711dcb897
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-7d764666f9-mbfzh                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-default-k8s-diff-port-129908                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-x5db5                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-default-k8s-diff-port-129908             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-129908    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-rzg9h                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-default-k8s-diff-port-129908             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 metrics-server-5d785b57d4-6h7tv                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         80s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-7tc75              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-flkqr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  108s  node-controller  Node default-k8s-diff-port-129908 event: Registered Node default-k8s-diff-port-129908 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node default-k8s-diff-port-129908 event: Registered Node default-k8s-diff-port-129908 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 27 65 9e 6b ac 08 06
	[  +0.000464] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 2e 94 6c 92 e4 08 06
	[Dec28 07:01] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff a6 29 4f 66 86 ca 08 06
	[  +0.004985] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 91 2b a7 d5 cf 08 06
	[ +18.742603] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 b5 93 ca e4 71 08 06
	[  +0.109309] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 c6 37 a9 4f 21 08 06
	[  +8.643062] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 52 6e 36 23 da 08 06
	[ +19.438211] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 70 fa ed e0 93 08 06
	[  +0.000530] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa 52 6e 36 23 da 08 06
	[  +0.857565] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 1d c3 e6 9a 7e 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 29 4f 66 86 ca 08 06
	[Dec28 07:02] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 a0 55 33 45 d1 08 06
	[  +0.000392] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 c6 37 a9 4f 21 08 06
	
	
	==> kernel <==
	 07:05:02 up  3:47,  0 user,  load average: 3.25, 3.05, 10.56
	Linux default-k8s-diff-port-129908 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: I1228 07:04:59.935376    2399 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-default-k8s-diff-port-129908"
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: I1228 07:04:59.935644    2399 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-default-k8s-diff-port-129908"
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:04:59.943793    2399 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-129908\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-129908"
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:04:59.943914    2399 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-129908" containerName="kube-scheduler"
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:04:59.943960    2399 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-129908\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-129908"
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:04:59.944036    2399 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-129908" containerName="kube-apiserver"
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:04:59.944569    2399 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-default-k8s-diff-port-129908\" already exists" pod="kube-system/kube-controller-manager-default-k8s-diff-port-129908"
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:04:59.944664    2399 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-129908" containerName="kube-controller-manager"
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:04:59.945028    2399 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-129908\" already exists" pod="kube-system/etcd-default-k8s-diff-port-129908"
	Dec 28 07:04:59 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:04:59.945096    2399 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-129908" containerName="etcd"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.151582    2399 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.151679    2399 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.152029    2399 kuberuntime_manager.go:1664] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-5d785b57d4-6h7tv_kube-system(3f1c607c-1aeb-4fc7-9865-8b161c339288): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" logger="UnhandledError"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.152093    2399 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-6h7tv" podUID="3f1c607c-1aeb-4fc7-9865-8b161c339288"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.202496    2399 log.go:32] "PullImage from image service failed" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.202583    2399 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.202887    2399 kuberuntime_manager.go:1664] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-867fb5f87b-7tc75_kubernetes-dashboard(99241345-6a9c-4473-a743-ec19ad1b10a7): ErrImagePull: rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" logger="UnhandledError"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.202962    2399 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"rpc error: code = Unimplemented desc = failed to pull and unpack image \\\"registry.k8s.io/echoserver:1.4\\\": not implemented: media type \\\"application/vnd.docker.distribution.manifest.v1+prettyjws\\\" is no longer supported since containerd v2.1, please rebuild the image as \\\"application/vnd.docker.distribution.manifest.v2+json\\\" or \\\"application/vnd.oci.image.manifest.v1+json\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tc75" podUID="99241345-6a9c-4473-a743-ec19ad1b10a7"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.939190    2399 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-129908" containerName="kube-apiserver"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.939559    2399 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-129908" containerName="etcd"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.939987    2399 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-129908" containerName="kube-scheduler"
	Dec 28 07:05:00 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:00.940206    2399 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-129908" containerName="kube-controller-manager"
	Dec 28 07:05:01 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:01.942978    2399 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-129908" containerName="etcd"
	Dec 28 07:05:01 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:01.943531    2399 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-129908" containerName="kube-apiserver"
	Dec 28 07:05:02 default-k8s-diff-port-129908 kubelet[2399]: E1228 07:05:02.122641    2399 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mbfzh" containerName="coredns"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-129908 -n default-k8s-diff-port-129908
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-129908 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-5d785b57d4-6h7tv dashboard-metrics-scraper-867fb5f87b-7tc75
helpers_test.go:283: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context default-k8s-diff-port-129908 describe pod metrics-server-5d785b57d4-6h7tv dashboard-metrics-scraper-867fb5f87b-7tc75
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-129908 describe pod metrics-server-5d785b57d4-6h7tv dashboard-metrics-scraper-867fb5f87b-7tc75: exit status 1 (75.524275ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5d785b57d4-6h7tv" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-7tc75" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context default-k8s-diff-port-129908 describe pod metrics-server-5d785b57d4-6h7tv dashboard-metrics-scraper-867fb5f87b-7tc75: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.75s)
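Two root-cause signals are visible in the logs above: `unpause` could not enumerate paused containers because the runc state root /run/containerd/runc/k8s.io does not exist on the node, and both echoserver:1.4 pulls fail, the fake.domain pull on DNS resolution and the registry.k8s.io pull because containerd v2.1+ rejects Docker schema-1 manifests. A minimal diagnostic sketch follows, assuming shell access to the profile and that ctr (shipped in the kicbase image) and crane (go-containerregistry) are available; these steps are illustrative and not part of the test harness:

	# Compare the runc root minikube queried against what containerd is using;
	# the directory may simply be absent when no task has been created under it.
	minikube -p default-k8s-diff-port-129908 ssh -- sudo ls /run/containerd/runc/
	minikube -p default-k8s-diff-port-129908 ssh -- sudo ctr -n k8s.io tasks ls

	# Inspect the manifest that containerd >= 2.1 refuses; a schema-1 image
	# reports "schemaVersion": 1 with a "signatures" block (the
	# application/vnd.docker.distribution.manifest.v1+prettyjws format).
	crane manifest registry.k8s.io/echoserver:1.4 | head -c 400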

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (7.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-982151 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-982151 -n embed-certs-982151
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-982151 -n embed-certs-982151: exit status 2 (362.392702ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Running"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-982151 -n embed-certs-982151
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-982151 -n embed-certs-982151: exit status 2 (403.243163ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-982151 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-982151 -n embed-certs-982151
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-982151 -n embed-certs-982151
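The sequence above shows an inconsistent post-pause state: the apiserver stayed "Running" while the kubelet reported "Stopped", so the pause assertion failed even though the subsequent unpause succeeded. A hedged spot check, assuming ctr is present in the node image (illustrative only, not a harness step):

	# A fully paused control plane would list its tasks with STATUS PAUSED;
	# RUNNING for kube-apiserver here would match the "Running" status above.
	minikube -p embed-certs-982151 ssh -- sudo ctr -n k8s.io tasks ls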
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-982151
helpers_test.go:244: (dbg) docker inspect embed-certs-982151:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53ffd3b986c336b8515cc37977846893b0806ca9d055992d8bd4aa1571e09f44",
	        "Created": "2025-12-28T07:02:47.360525245Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 880436,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:03:48.373918466Z",
	            "FinishedAt": "2025-12-28T07:03:47.296282976Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/53ffd3b986c336b8515cc37977846893b0806ca9d055992d8bd4aa1571e09f44/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53ffd3b986c336b8515cc37977846893b0806ca9d055992d8bd4aa1571e09f44/hostname",
	        "HostsPath": "/var/lib/docker/containers/53ffd3b986c336b8515cc37977846893b0806ca9d055992d8bd4aa1571e09f44/hosts",
	        "LogPath": "/var/lib/docker/containers/53ffd3b986c336b8515cc37977846893b0806ca9d055992d8bd4aa1571e09f44/53ffd3b986c336b8515cc37977846893b0806ca9d055992d8bd4aa1571e09f44-json.log",
	        "Name": "/embed-certs-982151",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-982151:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-982151",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53ffd3b986c336b8515cc37977846893b0806ca9d055992d8bd4aa1571e09f44",
	                "LowerDir": "/var/lib/docker/overlay2/2295c514acac592a32588e40f41c7d29e0d9963a825e80cdd4c56618cac7facc-init/diff:/var/lib/docker/overlay2/dfc7a4c580b7be84b0f83410b7478c6f6aa3f00b996556623ab9129bd6527422/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2295c514acac592a32588e40f41c7d29e0d9963a825e80cdd4c56618cac7facc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2295c514acac592a32588e40f41c7d29e0d9963a825e80cdd4c56618cac7facc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2295c514acac592a32588e40f41c7d29e0d9963a825e80cdd4c56618cac7facc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-982151",
	                "Source": "/var/lib/docker/volumes/embed-certs-982151/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-982151",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-982151",
	                "name.minikube.sigs.k8s.io": "embed-certs-982151",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6fb2404811989d854f506f2894a683ce2643f9e1cbe5e9e55c9c7aca2bddbfdc",
	            "SandboxKey": "/var/run/docker/netns/6fb240481198",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-982151": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f5d5475434581113583e143e8f4539ae4c2e37ce2af65439a0406393dddc242e",
	                    "EndpointID": "0f6994da231d24f669cacc8d79a0b09efe599c536fc21b51e330ccb36ef45f77",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "8e:ce:ca:1d:a6:4e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-982151",
	                        "53ffd3b986c3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-982151 -n embed-certs-982151
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-982151 logs -n 25
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │            PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p default-k8s-diff-port-129908 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-982151 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-982151            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p embed-certs-982151 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                        │ embed-certs-982151            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:04 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-129908 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p default-k8s-diff-port-129908 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:04 UTC │
	│ image   │ no-preload-456925 image list --format=json                                                                                                                                                                                                          │ no-preload-456925             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p no-preload-456925 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-456925             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ image   │ old-k8s-version-805353 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-805353        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ unpause │ -p no-preload-456925 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-456925             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p old-k8s-version-805353 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-805353        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ unpause │ -p old-k8s-version-805353 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-805353        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ delete  │ -p no-preload-456925                                                                                                                                                                                                                                │ no-preload-456925             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ delete  │ -p old-k8s-version-805353                                                                                                                                                                                                                           │ old-k8s-version-805353        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ delete  │ -p no-preload-456925                                                                                                                                                                                                                                │ no-preload-456925             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ start   │ -p newest-cni-190777 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0 │ newest-cni-190777             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │                     │
	│ delete  │ -p old-k8s-version-805353                                                                                                                                                                                                                           │ old-k8s-version-805353        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ start   │ -p test-preload-dl-gcs-287055 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd                                                                        │ test-preload-dl-gcs-287055    │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │                     │
	│ image   │ default-k8s-diff-port-129908 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p default-k8s-diff-port-129908 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ delete  │ -p test-preload-dl-gcs-287055                                                                                                                                                                                                                       │ test-preload-dl-gcs-287055    │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ start   │ -p test-preload-dl-github-941249 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd                                                                  │ test-preload-dl-github-941249 │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │                     │
	│ unpause │ -p default-k8s-diff-port-129908 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ image   │ embed-certs-982151 image list --format=json                                                                                                                                                                                                         │ embed-certs-982151            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p embed-certs-982151 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-982151            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:05 UTC │
	│ unpause │ -p embed-certs-982151 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-982151            │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:04:57
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:04:57.195692  893941 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:04:57.195974  893941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:04:57.195986  893941 out.go:374] Setting ErrFile to fd 2...
	I1228 07:04:57.195990  893941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:04:57.196196  893941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 07:04:57.196751  893941 out.go:368] Setting JSON to false
	I1228 07:04:57.198139  893941 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13641,"bootTime":1766891856,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 07:04:57.198212  893941 start.go:143] virtualization: kvm guest
	I1228 07:04:57.202341  893941 out.go:179] * [test-preload-dl-github-941249] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 07:04:57.203691  893941 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:04:57.203750  893941 notify.go:221] Checking for updates...
	I1228 07:04:57.206035  893941 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:04:57.207229  893941 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:04:57.208389  893941 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	I1228 07:04:57.209477  893941 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 07:04:57.210591  893941 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:04:57.212115  893941 config.go:182] Loaded profile config "default-k8s-diff-port-129908": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:04:57.212300  893941 config.go:182] Loaded profile config "embed-certs-982151": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:04:57.212438  893941 config.go:182] Loaded profile config "newest-cni-190777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:04:57.212567  893941 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:04:57.238559  893941 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 07:04:57.238660  893941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:04:57.297433  893941 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 07:04:57.286031992 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 07:04:57.297580  893941 docker.go:319] overlay module found
	I1228 07:04:57.298969  893941 out.go:179] * Using the docker driver based on user configuration
	I1228 07:04:57.300613  893941 start.go:309] selected driver: docker
	I1228 07:04:57.300634  893941 start.go:928] validating driver "docker" against <nil>
	I1228 07:04:57.300764  893941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:04:57.373040  893941 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 07:04:57.362531212 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 07:04:57.373260  893941 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 07:04:57.374009  893941 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1228 07:04:57.374205  893941 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 07:04:57.375934  893941 out.go:179] * Using Docker driver with root privileges
	I1228 07:04:57.376909  893941 cni.go:84] Creating CNI manager for ""
	I1228 07:04:57.376990  893941 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:04:57.377006  893941 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1228 07:04:57.377090  893941 start.go:353] cluster config:
	{Name:test-preload-dl-github-941249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-rc.2 ClusterName:test-preload-dl-github-941249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:04:57.378318  893941 out.go:179] * Starting "test-preload-dl-github-941249" primary control-plane node in "test-preload-dl-github-941249" cluster
	I1228 07:04:57.379381  893941 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:04:57.380562  893941 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:04:57.381578  893941 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime containerd
	I1228 07:04:57.381667  893941 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:04:57.404521  893941 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:04:57.404541  893941 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 to local cache
	I1228 07:04:57.404651  893941 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local cache directory
	I1228 07:04:57.404672  893941 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local cache directory, skipping pull
	I1228 07:04:57.404678  893941 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in cache, skipping pull
	I1228 07:04:57.404691  893941 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 as a tarball
	I1228 07:04:57.694006  893941 preload.go:148] Found remote preload: https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I1228 07:04:57.694047  893941 cache.go:65] Caching tarball of preloaded images
	I1228 07:04:57.694272  893941 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime containerd
	I1228 07:04:57.696475  893941 out.go:179] * Downloading Kubernetes v1.34.0-rc.2 preload ...
	I1228 07:04:54.156136  890975 kubeadm.go:884] updating cluster {Name:newest-cni-190777 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:04:54.156327  890975 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:04:54.156404  890975 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:04:54.182897  890975 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:04:54.182922  890975 containerd.go:542] Images already preloaded, skipping extraction
	I1228 07:04:54.182974  890975 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:04:54.210854  890975 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:04:54.210878  890975 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:04:54.210886  890975 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0 containerd true true} ...
	I1228 07:04:54.210976  890975 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-190777 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 07:04:54.211041  890975 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1228 07:04:54.243309  890975 cni.go:84] Creating CNI manager for ""
	I1228 07:04:54.243337  890975 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:04:54.243357  890975 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1228 07:04:54.243390  890975 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-190777 NodeName:newest-cni-190777 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:04:54.243560  890975 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-190777"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
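	The rendered kubeadm config above can be exercised offline before the real init; a minimal sketch, assuming the file lands at /var/tmp/minikube/kubeadm.yaml as the scp and cp steps in this log show:
	
	    # render the full set of manifests and certs without touching the node state
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	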
	
	I1228 07:04:54.243637  890975 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:04:54.252672  890975 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:04:54.252750  890975 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:04:54.261653  890975 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1228 07:04:54.274491  890975 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:04:54.290148  890975 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1228 07:04:54.303677  890975 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:04:54.307765  890975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:04:54.319329  890975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:04:54.407309  890975 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:04:54.428994  890975 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777 for IP: 192.168.103.2
	I1228 07:04:54.429018  890975 certs.go:195] generating shared ca certs ...
	I1228 07:04:54.429041  890975 certs.go:227] acquiring lock for ca certs: {Name:mkf3a34076ce55c96c0ca7e803bd863f5c48e3ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.429213  890975 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key
	I1228 07:04:54.429289  890975 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key
	I1228 07:04:54.429303  890975 certs.go:257] generating profile certs ...
	I1228 07:04:54.429385  890975 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.key
	I1228 07:04:54.429403  890975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.crt with IP's: []
	I1228 07:04:54.548598  890975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.crt ...
	I1228 07:04:54.548635  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.crt: {Name:mkd08dc3defb41e6cb9598c503c16c96e90f0b42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.548840  890975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.key ...
	I1228 07:04:54.548858  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.key: {Name:mk221973691b93552b36f745648af0626098b6ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.548982  890975 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key.74b88bb9
	I1228 07:04:54.549002  890975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt.74b88bb9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1228 07:04:54.752311  890975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt.74b88bb9 ...
	I1228 07:04:54.752343  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt.74b88bb9: {Name:mk7cf8b3d021c8d256fc9c6b1dfeef22fd232313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.752541  890975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key.74b88bb9 ...
	I1228 07:04:54.752561  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key.74b88bb9: {Name:mkd1e03fc46bf83eaf251f6f96dac8ed16146eeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.752695  890975 certs.go:382] copying /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt.74b88bb9 -> /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt
	I1228 07:04:54.752805  890975 certs.go:386] copying /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key.74b88bb9 -> /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key
	I1228 07:04:54.752894  890975 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.key
	I1228 07:04:54.752915  890975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.crt with IP's: []
	I1228 07:04:54.969153  890975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.crt ...
	I1228 07:04:54.969188  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.crt: {Name:mk4aac3247bad9d4fd84c69c062297c8ff05ca44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.969377  890975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.key ...
	I1228 07:04:54.969399  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.key: {Name:mke7a1ce2441ed9195ab16b7651e6bfb868f76fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.969599  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem (1338 bytes)
	W1228 07:04:54.969654  890975 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878_empty.pem, impossibly tiny 0 bytes
	I1228 07:04:54.969711  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 07:04:54.969757  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem (1078 bytes)
	I1228 07:04:54.969793  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:04:54.969829  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem (1675 bytes)
	I1228 07:04:54.969892  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem (1708 bytes)
	I1228 07:04:54.970562  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:04:54.992409  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:04:55.010822  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:04:55.028237  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:04:55.045540  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1228 07:04:55.065840  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1228 07:04:55.085090  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:04:55.103279  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 07:04:55.121123  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem --> /usr/share/ca-certificates/555878.pem (1338 bytes)
	I1228 07:04:55.144357  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem --> /usr/share/ca-certificates/5558782.pem (1708 bytes)
	I1228 07:04:55.163514  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:04:55.180645  890975 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:04:55.193285  890975 ssh_runner.go:195] Run: openssl version
	I1228 07:04:55.199453  890975 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/555878.pem
	I1228 07:04:55.207377  890975 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/555878.pem /etc/ssl/certs/555878.pem
	I1228 07:04:55.214667  890975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/555878.pem
	I1228 07:04:55.218714  890975 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/555878.pem
	I1228 07:04:55.218773  890975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/555878.pem
	I1228 07:04:55.252941  890975 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:04:55.260873  890975 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/555878.pem /etc/ssl/certs/51391683.0
	I1228 07:04:55.268681  890975 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5558782.pem
	I1228 07:04:55.275856  890975 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5558782.pem /etc/ssl/certs/5558782.pem
	I1228 07:04:55.283139  890975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5558782.pem
	I1228 07:04:55.286993  890975 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/5558782.pem
	I1228 07:04:55.287045  890975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5558782.pem
	I1228 07:04:55.321768  890975 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:04:55.329608  890975 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5558782.pem /etc/ssl/certs/3ec20f2e.0
	I1228 07:04:55.337022  890975 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:55.344669  890975 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:04:55.352248  890975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:55.356339  890975 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:29 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:55.356398  890975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:55.396110  890975 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:04:55.405045  890975 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
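	The 8-hex-digit link names above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention, which is how each `openssl x509 -hash -noout` run pairs with the `ln -fs` run that follows it; a sketch of the same pairing for the minikubeCA cert:
	
	    # the hash of the cert's subject becomes the symlink name OpenSSL looks up
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # b5213941.0 here
	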
	I1228 07:04:55.412931  890975 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:04:55.417426  890975 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 07:04:55.417515  890975 kubeadm.go:401] StartCluster: {Name:newest-cni-190777 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:04:55.417636  890975 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1228 07:04:55.429380  890975 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:04:55Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:04:55.429464  890975 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:04:55.437187  890975 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 07:04:55.445315  890975 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:04:55.445367  890975 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:04:55.452965  890975 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:04:55.452982  890975 kubeadm.go:158] found existing configuration files:
	
	I1228 07:04:55.453032  890975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:04:55.460809  890975 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:04:55.460879  890975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:04:55.468448  890975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:04:55.476086  890975 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:04:55.476140  890975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:04:55.483556  890975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:04:55.490991  890975 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:04:55.491116  890975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:04:55.498341  890975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:04:55.506479  890975 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:04:55.506521  890975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1228 07:04:55.513872  890975 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:04:55.617002  890975 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1228 07:04:55.678350  890975 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 07:04:57.697587  893941 preload.go:269] Downloading preload from https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I1228 07:04:57.697605  893941 preload.go:347] getting checksum for preloaded-images-k8s-v18-v1.34.0-rc.2-containerd-overlay2-amd64.tar.lz4 from github api...
	I1228 07:04:58.317172  893941 preload.go:316] Got checksum from Github API "997f783aaecccd9e6aa0d5928dacc2df37b5c0c8f5b3ad6d0d15583ff23aa25f"
	I1228 07:04:58.317239  893941 download.go:114] Downloading: https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.0-rc.2-containerd-overlay2-amd64.tar.lz4?checksum=sha256:997f783aaecccd9e6aa0d5928dacc2df37b5c0c8f5b3ad6d0d15583ff23aa25f -> /home/jenkins/minikube-integration/22352-552174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-containerd-overlay2-amd64.tar.lz4
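	The ?checksum=sha256:... suffix tells the downloader which digest to enforce; the same verification can be reproduced by hand with the URL and checksum taken from the two log lines above:
	
	    f=preloaded-images-k8s-v18-v1.34.0-rc.2-containerd-overlay2-amd64.tar.lz4
	    curl -fLO "https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/${f}"
	    # compare against the digest the GitHub API returned in the log
	    echo "997f783aaecccd9e6aa0d5928dacc2df37b5c0c8f5b3ad6d0d15583ff23aa25f  ${f}" | sha256sum -c -
	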
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	86b3e4383cbb0       6e38f40d628db       Less than a second ago   Running             storage-provisioner       2                   7a7731f617953       storage-provisioner                          kube-system
	a07751a2d50bc       07655ddf2eebe       56 seconds ago           Running             kubernetes-dashboard      0                   775e178b603a1       kubernetes-dashboard-b84665fb8-vkxng         kubernetes-dashboard
	aeef778d320d0       6e38f40d628db       58 seconds ago           Exited              storage-provisioner       1                   7a7731f617953       storage-provisioner                          kube-system
	896bae11feabd       4921d7a6dffa9       58 seconds ago           Running             kindnet-cni               1                   be78777c6580d       kindnet-fchxm                                kube-system
	5cf5c73f6a648       56cc512116c8f       58 seconds ago           Running             busybox                   1                   2cc8d4c88ba96       busybox                                      default
	9465d0758efc1       32652ff1bbe6b       58 seconds ago           Running             kube-proxy                1                   b3c2fc0ca2a1a       kube-proxy-z29fh                             kube-system
	d8c18facb511f       aa5e3ebc0dfed       59 seconds ago           Running             coredns                   1                   7525503abb724       coredns-7d764666f9-s8grm                     kube-system
	e86500480895b       550794e3b12ac       About a minute ago       Running             kube-scheduler            1                   6c2597063c002       kube-scheduler-embed-certs-982151            kube-system
	626f204bd4544       5c6acd67e9cd1       About a minute ago       Running             kube-apiserver            1                   529dd4d36e199       kube-apiserver-embed-certs-982151            kube-system
	10281dfa1e827       2c9a4b058bd7e       About a minute ago       Running             kube-controller-manager   1                   121c677cadb20       kube-controller-manager-embed-certs-982151   kube-system
	53cabd3183dc8       0a108f7189562       About a minute ago       Running             etcd                      1                   4ec925cf36e39       etcd-embed-certs-982151                      kube-system
	5ce7735e4d622       56cc512116c8f       About a minute ago       Exited              busybox                   0                   128f064de924f       busybox                                      default
	0d989dabc06a6       aa5e3ebc0dfed       About a minute ago       Exited              coredns                   0                   250d19fac4911       coredns-7d764666f9-s8grm                     kube-system
	f43a96b841933       4921d7a6dffa9       About a minute ago       Exited              kindnet-cni               0                   86aff389319ac       kindnet-fchxm                                kube-system
	1743ad5dae239       32652ff1bbe6b       About a minute ago       Exited              kube-proxy                0                   dc6a3ee7fdb8e       kube-proxy-z29fh                             kube-system
	e50d94ac38fe6       550794e3b12ac       2 minutes ago            Exited              kube-scheduler            0                   a8412b51879d6       kube-scheduler-embed-certs-982151            kube-system
	fbbd6e7d61423       2c9a4b058bd7e       2 minutes ago            Exited              kube-controller-manager   0                   86367ae2ea31d       kube-controller-manager-embed-certs-982151   kube-system
	ab1befada8516       0a108f7189562       2 minutes ago            Exited              etcd                      0                   122075847f366       etcd-embed-certs-982151                      kube-system
	f5a44f2a692e9       5c6acd67e9cd1       2 minutes ago            Exited              kube-apiserver            0                   a75019fc3cfda       kube-apiserver-embed-certs-982151            kube-system
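	This table is the CRI-level view of the node; an equivalent listing can be produced by hand (a sketch, using the profile name this log is inspecting):
	
	    # list all CRI containers, including exited ones, inside the minikube node
	    minikube ssh -p embed-certs-982151 -- sudo crictl ps -a
	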
	
	
	==> containerd <==
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.197116636Z" level=info msg="TearDown network for sandbox \"663d50842dadc2906c30793af9e0641ed8692feba35e723ce270f98b4e5ff32d\" successfully"
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.199498613Z" level=info msg="Ensure that sandbox 663d50842dadc2906c30793af9e0641ed8692feba35e723ce270f98b4e5ff32d in task-service has been cleanup successfully"
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.202936678Z" level=info msg="RemovePodSandbox \"663d50842dadc2906c30793af9e0641ed8692feba35e723ce270f98b4e5ff32d\" returns successfully"
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.204076967Z" level=info msg="StopPodSandbox for \"3cea5cc65fdb48708f8547572e3736efbb2b0237012dac804ca8f9f347b9577f\""
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.225291791Z" level=info msg="TearDown network for sandbox \"3cea5cc65fdb48708f8547572e3736efbb2b0237012dac804ca8f9f347b9577f\" successfully"
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.225390675Z" level=info msg="StopPodSandbox for \"3cea5cc65fdb48708f8547572e3736efbb2b0237012dac804ca8f9f347b9577f\" returns successfully"
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.225834525Z" level=info msg="RemovePodSandbox for \"3cea5cc65fdb48708f8547572e3736efbb2b0237012dac804ca8f9f347b9577f\""
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.225890925Z" level=info msg="Forcibly stopping sandbox \"3cea5cc65fdb48708f8547572e3736efbb2b0237012dac804ca8f9f347b9577f\""
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.245307621Z" level=info msg="TearDown network for sandbox \"3cea5cc65fdb48708f8547572e3736efbb2b0237012dac804ca8f9f347b9577f\" successfully"
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.247596419Z" level=info msg="Ensure that sandbox 3cea5cc65fdb48708f8547572e3736efbb2b0237012dac804ca8f9f347b9577f in task-service has been cleanup successfully"
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.253249020Z" level=info msg="RemovePodSandbox \"3cea5cc65fdb48708f8547572e3736efbb2b0237012dac804ca8f9f347b9577f\" returns successfully"
	Dec 28 07:05:01 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:01.658786115Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.696536522Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.697773154Z" level=info msg="CreateContainer within sandbox \"7a7731f6179533162dee18d14cffc1cd413a32cbc0a84e413c9b8216466d6e98\" for container name:\"storage-provisioner\"  attempt:2"
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.704786851Z" level=info msg="Container 86b3e4383cbb024b367f05d6b49a148a0e14144de934537bd7cabb1a8a4f74c2: CDI devices from CRI Config.CDIDevices: []"
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.712831950Z" level=info msg="CreateContainer within sandbox \"7a7731f6179533162dee18d14cffc1cd413a32cbc0a84e413c9b8216466d6e98\" for name:\"storage-provisioner\"  attempt:2 returns container id \"86b3e4383cbb024b367f05d6b49a148a0e14144de934537bd7cabb1a8a4f74c2\""
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.714384274Z" level=info msg="StartContainer for \"86b3e4383cbb024b367f05d6b49a148a0e14144de934537bd7cabb1a8a4f74c2\""
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.717453536Z" level=info msg="connecting to shim 86b3e4383cbb024b367f05d6b49a148a0e14144de934537bd7cabb1a8a4f74c2" address="unix:///run/containerd/s/bb7ba940f582b0114a6cc3b069f73c3509f762c203b4565bb7cb85eb86f52f7f" protocol=ttrpc version=3
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.751349884Z" level=info msg="fetch failed" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" host=fake.domain
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.752793061Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.752886170Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.753958752Z" level=info msg="PullImage \"registry.k8s.io/echoserver:1.4\""
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.788726317Z" level=info msg="StartContainer for \"86b3e4383cbb024b367f05d6b49a148a0e14144de934537bd7cabb1a8a4f74c2\" returns successfully"
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.807147660Z" level=error msg="PullImage \"registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\""
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.807314891Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
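	The second pull failure above is not a network error: containerd v2.x rejects Docker schema 1 manifests outright. That the image really is schema 1 can be confirmed without pulling it, e.g. with skopeo (assuming skopeo is available on a host with registry access):
	
	    # a schema 1 manifest reports "schemaVersion": 1 in its raw body
	    skopeo inspect --raw docker://registry.k8s.io/echoserver:1.4 | grep -m1 schemaVersion
	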
	
	
	==> describe nodes <==
	Name:               embed-certs-982151
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-982151
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=embed-certs-982151
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T07_03_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 07:02:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-982151
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 07:05:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 07:05:01 +0000   Sun, 28 Dec 2025 07:02:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 07:05:01 +0000   Sun, 28 Dec 2025 07:02:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 07:05:01 +0000   Sun, 28 Dec 2025 07:02:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 07:05:01 +0000   Sun, 28 Dec 2025 07:03:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-982151
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                4eb3dbe3-a90b-4981-83c9-1c52100b3e2a
	  Boot ID:                    b0f6328b-901c-4d58-bf8e-80c711dcb897
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-7d764666f9-s8grm                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     116s
	  kube-system                 etcd-embed-certs-982151                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m2s
	  kube-system                 kindnet-fchxm                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      116s
	  kube-system                 kube-apiserver-embed-certs-982151             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-embed-certs-982151    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-z29fh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-embed-certs-982151             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 metrics-server-5d785b57d4-xsks7               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         88s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-6h2qr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-vkxng          0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
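The 950m (11%) CPU request total is just the per-pod requests from the table above summed against the node's 8-CPU (8000m) allocatable pool; a quick shell sanity check, with the millicore values copied from the pod table:

	# coredns 100 + etcd 100 + kindnet 100 + apiserver 250 + controller-manager 200 + scheduler 100 + metrics-server 100
	echo $((100 + 100 + 100 + 250 + 200 + 100 + 100))   # 950 millicores; 950/8000 is roughly 11%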
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  117s  node-controller  Node embed-certs-982151 event: Registered Node embed-certs-982151 in Controller
	  Normal  RegisteredNode  63s   node-controller  Node embed-certs-982151 event: Registered Node embed-certs-982151 in Controller
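For a quicker look at just the readiness data in the node description above, the conditions can be pulled with a jsonpath query along these lines (a sketch, assuming the embed-certs-982151 context is still present in the kubeconfig):

	kubectl --context embed-certs-982151 get node embed-certs-982151 \
	  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'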
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 27 65 9e 6b ac 08 06
	[  +0.000464] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 2e 94 6c 92 e4 08 06
	[Dec28 07:01] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff a6 29 4f 66 86 ca 08 06
	[  +0.004985] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 91 2b a7 d5 cf 08 06
	[ +18.742603] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 b5 93 ca e4 71 08 06
	[  +0.109309] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 c6 37 a9 4f 21 08 06
	[  +8.643062] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 52 6e 36 23 da 08 06
	[ +19.438211] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 70 fa ed e0 93 08 06
	[  +0.000530] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa 52 6e 36 23 da 08 06
	[  +0.857565] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 1d c3 e6 9a 7e 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 29 4f 66 86 ca 08 06
	[Dec28 07:02] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 a0 55 33 45 d1 08 06
	[  +0.000392] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 c6 37 a9 4f 21 08 06
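The repeated "martian source" entries are the kernel's reverse-path filter logging packets whose source address is unexpected on eth0, common chatter on bridged container networks rather than a failure in itself. The relevant host toggles can be read with a sketch like this (read-only; changing them is a separate decision):

	sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians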
	
	
	==> kernel <==
	 07:05:03 up  3:47,  0 user,  load average: 3.25, 3.05, 10.56
	Linux embed-certs-982151 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: I1228 07:05:02.519336    2437 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-embed-certs-982151"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: I1228 07:05:02.519402    2437 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-embed-certs-982151"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: I1228 07:05:02.519566    2437 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-embed-certs-982151"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: I1228 07:05:02.519884    2437 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-embed-certs-982151"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.526964    2437 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-embed-certs-982151\" already exists" pod="kube-system/etcd-embed-certs-982151"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.527076    2437 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-982151" containerName="etcd"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.529929    2437 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-embed-certs-982151\" already exists" pod="kube-system/kube-controller-manager-embed-certs-982151"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.530044    2437 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-982151" containerName="kube-controller-manager"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.530951    2437 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-embed-certs-982151\" already exists" pod="kube-system/kube-scheduler-embed-certs-982151"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.531042    2437 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-982151" containerName="kube-scheduler"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.531058    2437 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-embed-certs-982151\" already exists" pod="kube-system/kube-apiserver-embed-certs-982151"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.531138    2437 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-982151" containerName="kube-apiserver"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: I1228 07:05:02.693654    2437 scope.go:122] "RemoveContainer" containerID="aeef778d320d04740ce6113b65d6b4d3a0ce00b0f5f0d9dc72147b95e4070699"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.753166    2437 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.753262    2437 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.753625    2437 kuberuntime_manager.go:1664] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-5d785b57d4-xsks7_kube-system(81e3f749-eed9-432c-89e9-f4548e1b7e3f): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" logger="UnhandledError"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.753722    2437 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-xsks7" podUID="81e3f749-eed9-432c-89e9-f4548e1b7e3f"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.807512    2437 log.go:32] "PullImage from image service failed" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.807592    2437 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.807873    2437 kuberuntime_manager.go:1664] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-867fb5f87b-6h2qr_kubernetes-dashboard(710fd7c1-d455-42cb-b399-5701310a27a7): ErrImagePull: rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" logger="UnhandledError"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.807946    2437 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"rpc error: code = Unimplemented desc = failed to pull and unpack image \\\"registry.k8s.io/echoserver:1.4\\\": not implemented: media type \\\"application/vnd.docker.distribution.manifest.v1+prettyjws\\\" is no longer supported since containerd v2.1, please rebuild the image as \\\"application/vnd.docker.distribution.manifest.v2+json\\\" or \\\"application/vnd.oci.image.manifest.v1+json\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-6h2qr" podUID="710fd7c1-d455-42cb-b399-5701310a27a7"
	Dec 28 07:05:03 embed-certs-982151 kubelet[2437]: E1228 07:05:03.527822    2437 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-982151" containerName="kube-apiserver"
	Dec 28 07:05:03 embed-certs-982151 kubelet[2437]: E1228 07:05:03.527998    2437 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-982151" containerName="kube-controller-manager"
	Dec 28 07:05:03 embed-certs-982151 kubelet[2437]: E1228 07:05:03.528349    2437 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-982151" containerName="etcd"
	Dec 28 07:05:03 embed-certs-982151 kubelet[2437]: E1228 07:05:03.528443    2437 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-982151" containerName="kube-scheduler"
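Two distinct pull failures show up above: metrics-server points at the deliberately unresolvable fake.domain registry, while dashboard-metrics-scraper fails because registry.k8s.io/echoserver:1.4 is published as a legacy Docker schema 1 manifest (application/vnd.docker.distribution.manifest.v1+prettyjws), which containerd v2.1+ refuses to pull. A rough way to confirm the manifest type, assuming skopeo is available (for schema 1 the schemaVersion field in the raw body is the giveaway, since the media type may only appear in the registry's Content-Type header):

	skopeo inspect --raw docker://registry.k8s.io/echoserver:1.4 | head -n 5

Per the kubelet error itself, the remedy is republishing the image as schema 2 (application/vnd.docker.distribution.manifest.v2+json) or OCI (application/vnd.oci.image.manifest.v1+json).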
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-982151 -n embed-certs-982151
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-982151 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-5d785b57d4-xsks7 dashboard-metrics-scraper-867fb5f87b-6h2qr
helpers_test.go:283: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context embed-certs-982151 describe pod metrics-server-5d785b57d4-xsks7 dashboard-metrics-scraper-867fb5f87b-6h2qr
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context embed-certs-982151 describe pod metrics-server-5d785b57d4-xsks7 dashboard-metrics-scraper-867fb5f87b-6h2qr: exit status 1 (68.385245ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5d785b57d4-xsks7" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-6h2qr" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context embed-certs-982151 describe pod metrics-server-5d785b57d4-xsks7 dashboard-metrics-scraper-867fb5f87b-6h2qr: exit status 1
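The NotFound results above are most likely a namespace mismatch rather than the pods having vanished: kubectl describe pod without -n searches only the default namespace, while these pods live in kube-system and kubernetes-dashboard. A namespaced sketch of the same check:

	kubectl --context embed-certs-982151 -n kube-system describe pod metrics-server-5d785b57d4-xsks7
	kubectl --context embed-certs-982151 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-867fb5f87b-6h2qr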
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-982151
helpers_test.go:244: (dbg) docker inspect embed-certs-982151:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53ffd3b986c336b8515cc37977846893b0806ca9d055992d8bd4aa1571e09f44",
	        "Created": "2025-12-28T07:02:47.360525245Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 880436,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:03:48.373918466Z",
	            "FinishedAt": "2025-12-28T07:03:47.296282976Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/53ffd3b986c336b8515cc37977846893b0806ca9d055992d8bd4aa1571e09f44/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53ffd3b986c336b8515cc37977846893b0806ca9d055992d8bd4aa1571e09f44/hostname",
	        "HostsPath": "/var/lib/docker/containers/53ffd3b986c336b8515cc37977846893b0806ca9d055992d8bd4aa1571e09f44/hosts",
	        "LogPath": "/var/lib/docker/containers/53ffd3b986c336b8515cc37977846893b0806ca9d055992d8bd4aa1571e09f44/53ffd3b986c336b8515cc37977846893b0806ca9d055992d8bd4aa1571e09f44-json.log",
	        "Name": "/embed-certs-982151",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-982151:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-982151",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53ffd3b986c336b8515cc37977846893b0806ca9d055992d8bd4aa1571e09f44",
	                "LowerDir": "/var/lib/docker/overlay2/2295c514acac592a32588e40f41c7d29e0d9963a825e80cdd4c56618cac7facc-init/diff:/var/lib/docker/overlay2/dfc7a4c580b7be84b0f83410b7478c6f6aa3f00b996556623ab9129bd6527422/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2295c514acac592a32588e40f41c7d29e0d9963a825e80cdd4c56618cac7facc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2295c514acac592a32588e40f41c7d29e0d9963a825e80cdd4c56618cac7facc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2295c514acac592a32588e40f41c7d29e0d9963a825e80cdd4c56618cac7facc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-982151",
	                "Source": "/var/lib/docker/volumes/embed-certs-982151/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-982151",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-982151",
	                "name.minikube.sigs.k8s.io": "embed-certs-982151",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6fb2404811989d854f506f2894a683ce2643f9e1cbe5e9e55c9c7aca2bddbfdc",
	            "SandboxKey": "/var/run/docker/netns/6fb240481198",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-982151": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f5d5475434581113583e143e8f4539ae4c2e37ce2af65439a0406393dddc242e",
	                    "EndpointID": "0f6994da231d24f669cacc8d79a0b09efe599c536fc21b51e330ccb36ef45f77",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "8e:ce:ca:1d:a6:4e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-982151",
	                        "53ffd3b986c3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
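Single fields can be pulled out of an inspect payload like this one with the same Go-template style the report already uses for minikube status; for example, the container state plus the host port mapped to the API server (a sketch):

	docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Ports "8443/tcp" 0).HostPort}}' embed-certs-982151

Against the payload above this would print running 33131.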
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-982151 -n embed-certs-982151
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-982151 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-982151 logs -n 25: (1.885191271s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │            PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-982151 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-982151            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p embed-certs-982151 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                        │ embed-certs-982151            │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:04 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-129908 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:03 UTC │
	│ start   │ -p default-k8s-diff-port-129908 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:03 UTC │ 28 Dec 25 07:04 UTC │
	│ image   │ no-preload-456925 image list --format=json                                                                                                                                                                                                          │ no-preload-456925             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p no-preload-456925 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-456925             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ image   │ old-k8s-version-805353 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-805353        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ unpause │ -p no-preload-456925 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-456925             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p old-k8s-version-805353 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-805353        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ unpause │ -p old-k8s-version-805353 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-805353        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ delete  │ -p no-preload-456925                                                                                                                                                                                                                                │ no-preload-456925             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ delete  │ -p old-k8s-version-805353                                                                                                                                                                                                                           │ old-k8s-version-805353        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ delete  │ -p no-preload-456925                                                                                                                                                                                                                                │ no-preload-456925             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ start   │ -p newest-cni-190777 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0 │ newest-cni-190777             │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │                     │
	│ delete  │ -p old-k8s-version-805353                                                                                                                                                                                                                           │ old-k8s-version-805353        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ start   │ -p test-preload-dl-gcs-287055 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd                                                                        │ test-preload-dl-gcs-287055    │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │                     │
	│ image   │ default-k8s-diff-port-129908 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p default-k8s-diff-port-129908 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ delete  │ -p test-preload-dl-gcs-287055                                                                                                                                                                                                                       │ test-preload-dl-gcs-287055    │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ start   │ -p test-preload-dl-github-941249 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd                                                                  │ test-preload-dl-github-941249 │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │                     │
	│ unpause │ -p default-k8s-diff-port-129908 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ image   │ embed-certs-982151 image list --format=json                                                                                                                                                                                                         │ embed-certs-982151            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p embed-certs-982151 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-982151            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:05 UTC │
	│ unpause │ -p embed-certs-982151 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-982151            │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ delete  │ -p default-k8s-diff-port-129908                                                                                                                                                                                                                     │ default-k8s-diff-port-129908  │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:04:57
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:04:57.195692  893941 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:04:57.195974  893941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:04:57.195986  893941 out.go:374] Setting ErrFile to fd 2...
	I1228 07:04:57.195990  893941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:04:57.196196  893941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 07:04:57.196751  893941 out.go:368] Setting JSON to false
	I1228 07:04:57.198139  893941 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13641,"bootTime":1766891856,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 07:04:57.198212  893941 start.go:143] virtualization: kvm guest
	I1228 07:04:57.202341  893941 out.go:179] * [test-preload-dl-github-941249] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 07:04:57.203691  893941 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:04:57.203750  893941 notify.go:221] Checking for updates...
	I1228 07:04:57.206035  893941 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:04:57.207229  893941 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:04:57.208389  893941 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	I1228 07:04:57.209477  893941 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 07:04:57.210591  893941 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:04:57.212115  893941 config.go:182] Loaded profile config "default-k8s-diff-port-129908": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:04:57.212300  893941 config.go:182] Loaded profile config "embed-certs-982151": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:04:57.212438  893941 config.go:182] Loaded profile config "newest-cni-190777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:04:57.212567  893941 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:04:57.238559  893941 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 07:04:57.238660  893941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:04:57.297433  893941 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 07:04:57.286031992 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 07:04:57.297580  893941 docker.go:319] overlay module found
	I1228 07:04:57.298969  893941 out.go:179] * Using the docker driver based on user configuration
	I1228 07:04:57.300613  893941 start.go:309] selected driver: docker
	I1228 07:04:57.300634  893941 start.go:928] validating driver "docker" against <nil>
	I1228 07:04:57.300764  893941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:04:57.373040  893941 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 07:04:57.362531212 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 07:04:57.373260  893941 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 07:04:57.374009  893941 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1228 07:04:57.374205  893941 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 07:04:57.375934  893941 out.go:179] * Using Docker driver with root privileges
	I1228 07:04:57.376909  893941 cni.go:84] Creating CNI manager for ""
	I1228 07:04:57.376990  893941 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:04:57.377006  893941 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1228 07:04:57.377090  893941 start.go:353] cluster config:
	{Name:test-preload-dl-github-941249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-rc.2 ClusterName:test-preload-dl-github-941249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:04:57.378318  893941 out.go:179] * Starting "test-preload-dl-github-941249" primary control-plane node in "test-preload-dl-github-941249" cluster
	I1228 07:04:57.379381  893941 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:04:57.380562  893941 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:04:57.381578  893941 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime containerd
	I1228 07:04:57.381667  893941 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:04:57.404521  893941 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:04:57.404541  893941 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 to local cache
	I1228 07:04:57.404651  893941 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local cache directory
	I1228 07:04:57.404672  893941 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local cache directory, skipping pull
	I1228 07:04:57.404678  893941 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in cache, skipping pull
	I1228 07:04:57.404691  893941 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 as a tarball
	I1228 07:04:57.694006  893941 preload.go:148] Found remote preload: https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I1228 07:04:57.694047  893941 cache.go:65] Caching tarball of preloaded images
	I1228 07:04:57.694272  893941 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime containerd
	I1228 07:04:57.696475  893941 out.go:179] * Downloading Kubernetes v1.34.0-rc.2 preload ...
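	The preload resolution above walks local daemon, then local cache, then the remote release asset found at preload.go:148; if a download stalls, that URL can be probed directly (reachability sketch using the URL from the log above):

	curl -fsSLI https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.0-rc.2-containerd-overlay2-amd64.tar.lz4 > /dev/null && echo preload reachable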
	I1228 07:04:54.156136  890975 kubeadm.go:884] updating cluster {Name:newest-cni-190777 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:04:54.156327  890975 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:04:54.156404  890975 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:04:54.182897  890975 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:04:54.182922  890975 containerd.go:542] Images already preloaded, skipping extraction
	I1228 07:04:54.182974  890975 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:04:54.210854  890975 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:04:54.210878  890975 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:04:54.210886  890975 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0 containerd true true} ...
	I1228 07:04:54.210976  890975 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-190777 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
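	The ExecStart override above lands as a systemd drop-in (the 10-kubeadm.conf scp a few lines below); the unit as systemd actually resolves it, drop-ins included, can be checked from inside the node with a sketch like this (assuming the newest-cni-190777 profile is still up):

	out/minikube-linux-amd64 ssh -p newest-cni-190777 -- systemctl cat kubelet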
	I1228 07:04:54.211041  890975 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1228 07:04:54.243309  890975 cni.go:84] Creating CNI manager for ""
	I1228 07:04:54.243337  890975 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:04:54.243357  890975 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1228 07:04:54.243390  890975 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-190777 NodeName:newest-cni-190777 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:04:54.243560  890975 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-190777"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
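	
	The multi-document config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is rendered to /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml further down. A config like this can be sanity-checked without mutating the node; a minimal sketch, assuming kubeadm v1.26+ is on PATH and using the path from this log:
	
	  sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml   # static validation only, no cluster changes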
	
	I1228 07:04:54.243637  890975 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:04:54.252672  890975 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:04:54.252750  890975 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:04:54.261653  890975 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1228 07:04:54.274491  890975 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:04:54.290148  890975 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1228 07:04:54.303677  890975 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:04:54.307765  890975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
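	
	The /etc/hosts rewrite above packs three steps into one bash -c string. A readable equivalent, with the IP and hostname taken from the log (the temp-file name here is illustrative):
	
	  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new   # drop any stale entry
	  printf '192.168.103.2\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new # append the fresh mapping
	  sudo cp /tmp/hosts.new /etc/hosts                                           # install over the original
	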
	I1228 07:04:54.319329  890975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:04:54.407309  890975 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:04:54.428994  890975 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777 for IP: 192.168.103.2
	I1228 07:04:54.429018  890975 certs.go:195] generating shared ca certs ...
	I1228 07:04:54.429041  890975 certs.go:227] acquiring lock for ca certs: {Name:mkf3a34076ce55c96c0ca7e803bd863f5c48e3ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.429213  890975 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key
	I1228 07:04:54.429289  890975 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key
	I1228 07:04:54.429303  890975 certs.go:257] generating profile certs ...
	I1228 07:04:54.429385  890975 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.key
	I1228 07:04:54.429403  890975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.crt with IP's: []
	I1228 07:04:54.548598  890975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.crt ...
	I1228 07:04:54.548635  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.crt: {Name:mkd08dc3defb41e6cb9598c503c16c96e90f0b42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.548840  890975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.key ...
	I1228 07:04:54.548858  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.key: {Name:mk221973691b93552b36f745648af0626098b6ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.548982  890975 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key.74b88bb9
	I1228 07:04:54.549002  890975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt.74b88bb9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1228 07:04:54.752311  890975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt.74b88bb9 ...
	I1228 07:04:54.752343  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt.74b88bb9: {Name:mk7cf8b3d021c8d256fc9c6b1dfeef22fd232313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.752541  890975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key.74b88bb9 ...
	I1228 07:04:54.752561  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key.74b88bb9: {Name:mkd1e03fc46bf83eaf251f6f96dac8ed16146eeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.752695  890975 certs.go:382] copying /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt.74b88bb9 -> /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt
	I1228 07:04:54.752805  890975 certs.go:386] copying /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key.74b88bb9 -> /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key
	I1228 07:04:54.752894  890975 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.key
	I1228 07:04:54.752915  890975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.crt with IP's: []
	I1228 07:04:54.969153  890975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.crt ...
	I1228 07:04:54.969188  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.crt: {Name:mk4aac3247bad9d4fd84c69c062297c8ff05ca44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.969377  890975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.key ...
	I1228 07:04:54.969399  890975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.key: {Name:mke7a1ce2441ed9195ab16b7651e6bfb868f76fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:04:54.969599  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem (1338 bytes)
	W1228 07:04:54.969654  890975 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878_empty.pem, impossibly tiny 0 bytes
	I1228 07:04:54.969711  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 07:04:54.969757  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem (1078 bytes)
	I1228 07:04:54.969793  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:04:54.969829  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem (1675 bytes)
	I1228 07:04:54.969892  890975 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem (1708 bytes)
	I1228 07:04:54.970562  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:04:54.992409  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:04:55.010822  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:04:55.028237  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:04:55.045540  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1228 07:04:55.065840  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1228 07:04:55.085090  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:04:55.103279  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 07:04:55.121123  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem --> /usr/share/ca-certificates/555878.pem (1338 bytes)
	I1228 07:04:55.144357  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem --> /usr/share/ca-certificates/5558782.pem (1708 bytes)
	I1228 07:04:55.163514  890975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:04:55.180645  890975 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:04:55.193285  890975 ssh_runner.go:195] Run: openssl version
	I1228 07:04:55.199453  890975 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/555878.pem
	I1228 07:04:55.207377  890975 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/555878.pem /etc/ssl/certs/555878.pem
	I1228 07:04:55.214667  890975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/555878.pem
	I1228 07:04:55.218714  890975 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/555878.pem
	I1228 07:04:55.218773  890975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/555878.pem
	I1228 07:04:55.252941  890975 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:04:55.260873  890975 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/555878.pem /etc/ssl/certs/51391683.0
	I1228 07:04:55.268681  890975 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5558782.pem
	I1228 07:04:55.275856  890975 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5558782.pem /etc/ssl/certs/5558782.pem
	I1228 07:04:55.283139  890975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5558782.pem
	I1228 07:04:55.286993  890975 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/5558782.pem
	I1228 07:04:55.287045  890975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5558782.pem
	I1228 07:04:55.321768  890975 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:04:55.329608  890975 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5558782.pem /etc/ssl/certs/3ec20f2e.0
	I1228 07:04:55.337022  890975 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:55.344669  890975 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:04:55.352248  890975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:55.356339  890975 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:29 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:55.356398  890975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:04:55.396110  890975 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:04:55.405045  890975 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
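	
	The repeated openssl x509 -hash -noout calls above explain the opaque link names: 51391683.0, 3ec20f2e.0 and b5213941.0 are OpenSSL subject-hash filenames, the layout OpenSSL's CApath lookup expects under /etc/ssl/certs. A minimal sketch of the same dance for one certificate, using the paths from this log:
	
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints the subject hash, b5213941 here
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"  # <hash>.0 is the expected link name
	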
	I1228 07:04:55.412931  890975 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:04:55.417426  890975 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 07:04:55.417515  890975 kubeadm.go:401] StartCluster: {Name:newest-cni-190777 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:04:55.417636  890975 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1228 07:04:55.429380  890975 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:04:55Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:04:55.429464  890975 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:04:55.437187  890975 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 07:04:55.445315  890975 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:04:55.445367  890975 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:04:55.452965  890975 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:04:55.452982  890975 kubeadm.go:158] found existing configuration files:
	
	I1228 07:04:55.453032  890975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:04:55.460809  890975 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:04:55.460879  890975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:04:55.468448  890975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:04:55.476086  890975 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:04:55.476140  890975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:04:55.483556  890975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:04:55.490991  890975 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:04:55.491116  890975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:04:55.498341  890975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:04:55.506479  890975 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:04:55.506521  890975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1228 07:04:55.513872  890975 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:04:55.617002  890975 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1228 07:04:55.678350  890975 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
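	
	The long --ignore-preflight-errors list in the Start command above is why init proceeds past the two [WARNING] lines. To see every check that would fire without suppressing them, the preflight phase can be run on its own; a sketch using the config path from this log:
	
	  sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml   # runs only the pre-flight checks
	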
	I1228 07:04:57.697587  893941 preload.go:269] Downloading preload from https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I1228 07:04:57.697605  893941 preload.go:347] getting checksum for preloaded-images-k8s-v18-v1.34.0-rc.2-containerd-overlay2-amd64.tar.lz4 from github api...
	I1228 07:04:58.317172  893941 preload.go:316] Got checksum from Github API "997f783aaecccd9e6aa0d5928dacc2df37b5c0c8f5b3ad6d0d15583ff23aa25f"
	I1228 07:04:58.317239  893941 download.go:114] Downloading: https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.0-rc.2-containerd-overlay2-amd64.tar.lz4?checksum=sha256:997f783aaecccd9e6aa0d5928dacc2df37b5c0c8f5b3ad6d0d15583ff23aa25f -> /home/jenkins/minikube-integration/22352-552174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-containerd-overlay2-amd64.tar.lz4
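	
	The ?checksum=sha256:... suffix on the download URL above makes the downloader verify the tarball against the sum fetched from the GitHub API. The equivalent manual check, assuming the file has already been downloaded into the current directory:
	
	  echo "997f783aaecccd9e6aa0d5928dacc2df37b5c0c8f5b3ad6d0d15583ff23aa25f  preloaded-images-k8s-v18-v1.34.0-rc.2-containerd-overlay2-amd64.tar.lz4" | sha256sum -c -
	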
	I1228 07:05:02.983930  890975 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:05:02.984013  890975 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:05:02.984148  890975 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:05:02.984300  890975 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1228 07:05:02.984343  890975 kubeadm.go:319] OS: Linux
	I1228 07:05:02.984415  890975 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:05:02.984500  890975 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:05:02.984645  890975 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:05:02.984723  890975 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:05:02.984791  890975 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:05:02.984886  890975 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:05:02.984962  890975 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:05:02.985047  890975 kubeadm.go:319] CGROUPS_IO: enabled
	I1228 07:05:02.985180  890975 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:05:02.985365  890975 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:05:02.985498  890975 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:05:02.985599  890975 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:05:02.986802  890975 out.go:252]   - Generating certificates and keys ...
	I1228 07:05:02.986903  890975 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:05:02.987008  890975 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:05:02.987113  890975 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1228 07:05:02.987244  890975 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1228 07:05:02.987342  890975 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1228 07:05:02.987418  890975 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1228 07:05:02.987502  890975 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1228 07:05:02.987746  890975 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-190777] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1228 07:05:02.987942  890975 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1228 07:05:02.988106  890975 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-190777] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1228 07:05:02.988226  890975 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1228 07:05:02.988342  890975 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1228 07:05:02.988442  890975 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1228 07:05:02.988535  890975 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:05:02.988611  890975 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:05:02.988696  890975 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:05:02.988775  890975 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:05:02.988876  890975 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:05:02.988957  890975 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:05:02.989073  890975 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:05:02.989185  890975 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:05:02.990478  890975 out.go:252]   - Booting up control plane ...
	I1228 07:05:02.990594  890975 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:05:02.990693  890975 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:05:02.990798  890975 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:05:02.990975  890975 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:05:02.991113  890975 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:05:02.991348  890975 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:05:02.991475  890975 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:05:02.991538  890975 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:05:02.991745  890975 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:05:02.991887  890975 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:05:02.991979  890975 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.750647ms
	I1228 07:05:02.992091  890975 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1228 07:05:02.992263  890975 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1228 07:05:02.992404  890975 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1228 07:05:02.992529  890975 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1228 07:05:02.992646  890975 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.008642557s
	I1228 07:05:02.992843  890975 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.897923826s
	I1228 07:05:02.992953  890975 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002199908s
	I1228 07:05:02.993125  890975 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1228 07:05:02.993306  890975 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1228 07:05:02.993405  890975 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1228 07:05:02.993630  890975 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-190777 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1228 07:05:02.993710  890975 kubeadm.go:319] [bootstrap-token] Using token: 5unf6z.xhmas7m9rqr9oe6w
	I1228 07:05:02.995585  890975 out.go:252]   - Configuring RBAC rules ...
	I1228 07:05:02.995712  890975 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1228 07:05:02.995826  890975 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1228 07:05:02.996001  890975 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1228 07:05:02.996158  890975 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1228 07:05:02.996308  890975 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1228 07:05:02.996421  890975 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1228 07:05:02.996585  890975 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1228 07:05:02.996656  890975 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1228 07:05:02.996740  890975 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1228 07:05:02.996753  890975 kubeadm.go:319] 
	I1228 07:05:02.996834  890975 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1228 07:05:02.996842  890975 kubeadm.go:319] 
	I1228 07:05:02.996972  890975 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1228 07:05:02.996993  890975 kubeadm.go:319] 
	I1228 07:05:02.997038  890975 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1228 07:05:02.997133  890975 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1228 07:05:02.997211  890975 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1228 07:05:02.997228  890975 kubeadm.go:319] 
	I1228 07:05:02.997302  890975 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1228 07:05:02.997307  890975 kubeadm.go:319] 
	I1228 07:05:02.997375  890975 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1228 07:05:02.997380  890975 kubeadm.go:319] 
	I1228 07:05:02.997469  890975 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1228 07:05:02.997578  890975 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1228 07:05:02.997678  890975 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1228 07:05:02.997683  890975 kubeadm.go:319] 
	I1228 07:05:02.997805  890975 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1228 07:05:02.997916  890975 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1228 07:05:02.997921  890975 kubeadm.go:319] 
	I1228 07:05:02.998043  890975 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 5unf6z.xhmas7m9rqr9oe6w \
	I1228 07:05:02.998180  890975 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8f6f27d4b53ddb491c7810aae1307211acdec58839383cd713f9c54ba838bcc3 \
	I1228 07:05:02.998210  890975 kubeadm.go:319] 	--control-plane 
	I1228 07:05:02.998228  890975 kubeadm.go:319] 
	I1228 07:05:02.998345  890975 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1228 07:05:02.998361  890975 kubeadm.go:319] 
	I1228 07:05:02.998476  890975 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5unf6z.xhmas7m9rqr9oe6w \
	I1228 07:05:02.998644  890975 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8f6f27d4b53ddb491c7810aae1307211acdec58839383cd713f9c54ba838bcc3 
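	
	The --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 of the cluster CA's public key. It can be recomputed on the control plane with the standard upstream recipe, using the certificatesDir from the config earlier in this log:
	
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'   # should match 8f6f27d4b53d...
	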
	I1228 07:05:02.998655  890975 cni.go:84] Creating CNI manager for ""
	I1228 07:05:02.998663  890975 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:05:03.000804  890975 out.go:179] * Configuring CNI (Container Networking Interface) ...
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	86b3e4383cbb0       6e38f40d628db       3 seconds ago        Running             storage-provisioner       2                   7a7731f617953       storage-provisioner                          kube-system
	a07751a2d50bc       07655ddf2eebe       59 seconds ago       Running             kubernetes-dashboard      0                   775e178b603a1       kubernetes-dashboard-b84665fb8-vkxng         kubernetes-dashboard
	aeef778d320d0       6e38f40d628db       About a minute ago   Exited              storage-provisioner       1                   7a7731f617953       storage-provisioner                          kube-system
	896bae11feabd       4921d7a6dffa9       About a minute ago   Running             kindnet-cni               1                   be78777c6580d       kindnet-fchxm                                kube-system
	5cf5c73f6a648       56cc512116c8f       About a minute ago   Running             busybox                   1                   2cc8d4c88ba96       busybox                                      default
	9465d0758efc1       32652ff1bbe6b       About a minute ago   Running             kube-proxy                1                   b3c2fc0ca2a1a       kube-proxy-z29fh                             kube-system
	d8c18facb511f       aa5e3ebc0dfed       About a minute ago   Running             coredns                   1                   7525503abb724       coredns-7d764666f9-s8grm                     kube-system
	e86500480895b       550794e3b12ac       About a minute ago   Running             kube-scheduler            1                   6c2597063c002       kube-scheduler-embed-certs-982151            kube-system
	626f204bd4544       5c6acd67e9cd1       About a minute ago   Running             kube-apiserver            1                   529dd4d36e199       kube-apiserver-embed-certs-982151            kube-system
	10281dfa1e827       2c9a4b058bd7e       About a minute ago   Running             kube-controller-manager   1                   121c677cadb20       kube-controller-manager-embed-certs-982151   kube-system
	53cabd3183dc8       0a108f7189562       About a minute ago   Running             etcd                      1                   4ec925cf36e39       etcd-embed-certs-982151                      kube-system
	5ce7735e4d622       56cc512116c8f       About a minute ago   Exited              busybox                   0                   128f064de924f       busybox                                      default
	0d989dabc06a6       aa5e3ebc0dfed       About a minute ago   Exited              coredns                   0                   250d19fac4911       coredns-7d764666f9-s8grm                     kube-system
	f43a96b841933       4921d7a6dffa9       About a minute ago   Exited              kindnet-cni               0                   86aff389319ac       kindnet-fchxm                                kube-system
	1743ad5dae239       32652ff1bbe6b       About a minute ago   Exited              kube-proxy                0                   dc6a3ee7fdb8e       kube-proxy-z29fh                             kube-system
	e50d94ac38fe6       550794e3b12ac       2 minutes ago        Exited              kube-scheduler            0                   a8412b51879d6       kube-scheduler-embed-certs-982151            kube-system
	fbbd6e7d61423       2c9a4b058bd7e       2 minutes ago        Exited              kube-controller-manager   0                   86367ae2ea31d       kube-controller-manager-embed-certs-982151   kube-system
	ab1befada8516       0a108f7189562       2 minutes ago        Exited              etcd                      0                   122075847f366       etcd-embed-certs-982151                      kube-system
	f5a44f2a692e9       5c6acd67e9cd1       2 minutes ago        Exited              kube-apiserver            0                   a75019fc3cfda       kube-apiserver-embed-certs-982151            kube-system
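	
	The table above is CRI-level container state as containerd reports it, independent of the Docker CLI on the host. A sketch for reproducing it against this profile, assuming crictl inside the node (this log itself runs crictl earlier):
	
	  minikube ssh -p embed-certs-982151 "sudo crictl ps -a"   # list running and exited CRI containers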
	
	
	==> containerd <==
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.197116636Z" level=info msg="TearDown network for sandbox \"663d50842dadc2906c30793af9e0641ed8692feba35e723ce270f98b4e5ff32d\" successfully"
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.199498613Z" level=info msg="Ensure that sandbox 663d50842dadc2906c30793af9e0641ed8692feba35e723ce270f98b4e5ff32d in task-service has been cleanup successfully"
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.202936678Z" level=info msg="RemovePodSandbox \"663d50842dadc2906c30793af9e0641ed8692feba35e723ce270f98b4e5ff32d\" returns successfully"
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.204076967Z" level=info msg="StopPodSandbox for \"3cea5cc65fdb48708f8547572e3736efbb2b0237012dac804ca8f9f347b9577f\""
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.225291791Z" level=info msg="TearDown network for sandbox \"3cea5cc65fdb48708f8547572e3736efbb2b0237012dac804ca8f9f347b9577f\" successfully"
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.225390675Z" level=info msg="StopPodSandbox for \"3cea5cc65fdb48708f8547572e3736efbb2b0237012dac804ca8f9f347b9577f\" returns successfully"
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.225834525Z" level=info msg="RemovePodSandbox for \"3cea5cc65fdb48708f8547572e3736efbb2b0237012dac804ca8f9f347b9577f\""
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.225890925Z" level=info msg="Forcibly stopping sandbox \"3cea5cc65fdb48708f8547572e3736efbb2b0237012dac804ca8f9f347b9577f\""
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.245307621Z" level=info msg="TearDown network for sandbox \"3cea5cc65fdb48708f8547572e3736efbb2b0237012dac804ca8f9f347b9577f\" successfully"
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.247596419Z" level=info msg="Ensure that sandbox 3cea5cc65fdb48708f8547572e3736efbb2b0237012dac804ca8f9f347b9577f in task-service has been cleanup successfully"
	Dec 28 07:04:54 embed-certs-982151 containerd[447]: time="2025-12-28T07:04:54.253249020Z" level=info msg="RemovePodSandbox \"3cea5cc65fdb48708f8547572e3736efbb2b0237012dac804ca8f9f347b9577f\" returns successfully"
	Dec 28 07:05:01 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:01.658786115Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.696536522Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.697773154Z" level=info msg="CreateContainer within sandbox \"7a7731f6179533162dee18d14cffc1cd413a32cbc0a84e413c9b8216466d6e98\" for container name:\"storage-provisioner\"  attempt:2"
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.704786851Z" level=info msg="Container 86b3e4383cbb024b367f05d6b49a148a0e14144de934537bd7cabb1a8a4f74c2: CDI devices from CRI Config.CDIDevices: []"
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.712831950Z" level=info msg="CreateContainer within sandbox \"7a7731f6179533162dee18d14cffc1cd413a32cbc0a84e413c9b8216466d6e98\" for name:\"storage-provisioner\"  attempt:2 returns container id \"86b3e4383cbb024b367f05d6b49a148a0e14144de934537bd7cabb1a8a4f74c2\""
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.714384274Z" level=info msg="StartContainer for \"86b3e4383cbb024b367f05d6b49a148a0e14144de934537bd7cabb1a8a4f74c2\""
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.717453536Z" level=info msg="connecting to shim 86b3e4383cbb024b367f05d6b49a148a0e14144de934537bd7cabb1a8a4f74c2" address="unix:///run/containerd/s/bb7ba940f582b0114a6cc3b069f73c3509f762c203b4565bb7cb85eb86f52f7f" protocol=ttrpc version=3
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.751349884Z" level=info msg="fetch failed" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" host=fake.domain
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.752793061Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.752886170Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.753958752Z" level=info msg="PullImage \"registry.k8s.io/echoserver:1.4\""
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.788726317Z" level=info msg="StartContainer for \"86b3e4383cbb024b367f05d6b49a148a0e14144de934537bd7cabb1a8a4f74c2\" returns successfully"
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.807147660Z" level=error msg="PullImage \"registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\""
	Dec 28 07:05:02 embed-certs-982151 containerd[447]: time="2025-12-28T07:05:02.807314891Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
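	
	The second PullImage failure above is not a networking problem: registry.k8s.io/echoserver:1.4 is published with a legacy Docker schema-1 manifest (application/vnd.docker.distribution.manifest.v1+prettyjws), which containerd 2.1+ refuses outright, exactly as the error text says. A sketch for confirming the manifest schema from any host, assuming skopeo and jq are installed:
	
	  skopeo inspect --raw docker://registry.k8s.io/echoserver:1.4 | jq '.schemaVersion'   # 1 marks the legacy schema-1 format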
	
	
	==> describe nodes <==
	Name:               embed-certs-982151
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-982151
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=embed-certs-982151
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T07_03_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 07:02:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-982151
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 07:05:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 07:05:01 +0000   Sun, 28 Dec 2025 07:02:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 07:05:01 +0000   Sun, 28 Dec 2025 07:02:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 07:05:01 +0000   Sun, 28 Dec 2025 07:02:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 07:05:01 +0000   Sun, 28 Dec 2025 07:03:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-982151
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                4eb3dbe3-a90b-4981-83c9-1c52100b3e2a
	  Boot ID:                    b0f6328b-901c-4d58-bf8e-80c711dcb897
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 coredns-7d764666f9-s8grm                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     119s
	  kube-system                 etcd-embed-certs-982151                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m5s
	  kube-system                 kindnet-fchxm                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      119s
	  kube-system                 kube-apiserver-embed-certs-982151             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-embed-certs-982151    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-z29fh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-scheduler-embed-certs-982151             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 metrics-server-5d785b57d4-xsks7               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         91s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-6h2qr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-vkxng          0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  2m    node-controller  Node embed-certs-982151 event: Registered Node embed-certs-982151 in Controller
	  Normal  RegisteredNode  66s   node-controller  Node embed-certs-982151 event: Registered Node embed-certs-982151 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 27 65 9e 6b ac 08 06
	[  +0.000464] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 2e 94 6c 92 e4 08 06
	[Dec28 07:01] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff a6 29 4f 66 86 ca 08 06
	[  +0.004985] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 91 2b a7 d5 cf 08 06
	[ +18.742603] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 b5 93 ca e4 71 08 06
	[  +0.109309] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 c6 37 a9 4f 21 08 06
	[  +8.643062] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 52 6e 36 23 da 08 06
	[ +19.438211] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 70 fa ed e0 93 08 06
	[  +0.000530] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa 52 6e 36 23 da 08 06
	[  +0.857565] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 1d c3 e6 9a 7e 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 29 4f 66 86 ca 08 06
	[Dec28 07:02] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 a0 55 33 45 d1 08 06
	[  +0.000392] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 c6 37 a9 4f 21 08 06
	
	
	==> kernel <==
	 07:05:06 up  3:47,  0 user,  load average: 3.87, 3.19, 10.56
	Linux embed-certs-982151 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: I1228 07:05:02.519402    2437 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-embed-certs-982151"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: I1228 07:05:02.519566    2437 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-embed-certs-982151"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: I1228 07:05:02.519884    2437 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-embed-certs-982151"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.526964    2437 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-embed-certs-982151\" already exists" pod="kube-system/etcd-embed-certs-982151"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.527076    2437 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-982151" containerName="etcd"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.529929    2437 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-embed-certs-982151\" already exists" pod="kube-system/kube-controller-manager-embed-certs-982151"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.530044    2437 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-982151" containerName="kube-controller-manager"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.530951    2437 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-embed-certs-982151\" already exists" pod="kube-system/kube-scheduler-embed-certs-982151"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.531042    2437 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-982151" containerName="kube-scheduler"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.531058    2437 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-embed-certs-982151\" already exists" pod="kube-system/kube-apiserver-embed-certs-982151"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.531138    2437 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-982151" containerName="kube-apiserver"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: I1228 07:05:02.693654    2437 scope.go:122] "RemoveContainer" containerID="aeef778d320d04740ce6113b65d6b4d3a0ce00b0f5f0d9dc72147b95e4070699"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.753166    2437 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.753262    2437 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.753625    2437 kuberuntime_manager.go:1664] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-5d785b57d4-xsks7_kube-system(81e3f749-eed9-432c-89e9-f4548e1b7e3f): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" logger="UnhandledError"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.753722    2437 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-xsks7" podUID="81e3f749-eed9-432c-89e9-f4548e1b7e3f"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.807512    2437 log.go:32] "PullImage from image service failed" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.807592    2437 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.807873    2437 kuberuntime_manager.go:1664] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-867fb5f87b-6h2qr_kubernetes-dashboard(710fd7c1-d455-42cb-b399-5701310a27a7): ErrImagePull: rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" logger="UnhandledError"
	Dec 28 07:05:02 embed-certs-982151 kubelet[2437]: E1228 07:05:02.807946    2437 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"rpc error: code = Unimplemented desc = failed to pull and unpack image \\\"registry.k8s.io/echoserver:1.4\\\": not implemented: media type \\\"application/vnd.docker.distribution.manifest.v1+prettyjws\\\" is no longer supported since containerd v2.1, please rebuild the image as \\\"application/vnd.docker.distribution.manifest.v2+json\\\" or \\\"application/vnd.oci.image.manifest.v1+json\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-6h2qr" podUID="710fd7c1-d455-42cb-b399-5701310a27a7"
	Dec 28 07:05:03 embed-certs-982151 kubelet[2437]: E1228 07:05:03.527822    2437 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-982151" containerName="kube-apiserver"
	Dec 28 07:05:03 embed-certs-982151 kubelet[2437]: E1228 07:05:03.527998    2437 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-982151" containerName="kube-controller-manager"
	Dec 28 07:05:03 embed-certs-982151 kubelet[2437]: E1228 07:05:03.528349    2437 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-982151" containerName="etcd"
	Dec 28 07:05:03 embed-certs-982151 kubelet[2437]: E1228 07:05:03.528443    2437 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-982151" containerName="kube-scheduler"
	Dec 28 07:05:04 embed-certs-982151 kubelet[2437]: E1228 07:05:04.530516    2437 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-982151" containerName="etcd"
	

-- /stdout --
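The two ErrImagePull signatures above have different causes: the metrics-server pull fails by design (the test deliberately points the MetricsServer registry at the unresolvable fake.domain), while the echoserver pull for dashboard-metrics-scraper fails because containerd 2.1+ rejects Docker schema-1 manifests (application/vnd.docker.distribution.manifest.v1+prettyjws), which registry.k8s.io/echoserver:1.4 still uses. A hedged sketch for confirming and working around the schema-1 rejection with skopeo (skopeo is not part of this harness, and REGISTRY is a hypothetical registry you control):

	# Confirm the manifest media type without pulling the image:
	skopeo inspect --raw docker://registry.k8s.io/echoserver:1.4 | head -c 200
	# Re-publish under a schema-2 manifest so containerd >= 2.1 can pull it
	# (REGISTRY is a placeholder, not anything used by this test run):
	skopeo copy --format v2s2 docker://registry.k8s.io/echoserver:1.4 docker://REGISTRY/echoserver:1.4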
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-982151 -n embed-certs-982151
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-982151 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-5d785b57d4-xsks7 dashboard-metrics-scraper-867fb5f87b-6h2qr
helpers_test.go:283: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context embed-certs-982151 describe pod metrics-server-5d785b57d4-xsks7 dashboard-metrics-scraper-867fb5f87b-6h2qr
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context embed-certs-982151 describe pod metrics-server-5d785b57d4-xsks7 dashboard-metrics-scraper-867fb5f87b-6h2qr: exit status 1 (83.506946ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5d785b57d4-xsks7" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-6h2qr" not found

** /stderr **
helpers_test.go:288: kubectl --context embed-certs-982151 describe pod metrics-server-5d785b57d4-xsks7 dashboard-metrics-scraper-867fb5f87b-6h2qr: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.63s)

TestStartStop/group/newest-cni/serial/Pause (5.23s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-190777 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-190777 -n newest-cni-190777
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-190777 -n newest-cni-190777: exit status 2 (322.568009ms)

-- stdout --
	Running

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Running"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-190777 -n newest-cni-190777
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-190777 -n newest-cni-190777: exit status 2 (316.467617ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-190777 --alsologtostderr -v=1
E1228 07:05:21.496530  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/calico-258759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-190777 -n newest-cni-190777
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-190777 -n newest-cni-190777
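The failing assertion above is the apiserver line: after pause the profile reports the kubelet as "Stopped" but the apiserver as "Running", so the pause only partially took effect. A hedged diagnostic sketch using standard minikube and crictl commands (not part of the harness) to see what the runtime actually left running inside the node:

	# List containers containerd still reports as running after the pause:
	out/minikube-linux-amd64 ssh -p newest-cni-190777 -- sudo crictl ps --state running
	# And the pod sandboxes, to see whether kube-apiserver's sandbox is among them:
	out/minikube-linux-amd64 ssh -p newest-cni-190777 -- sudo crictl pods

If kube-apiserver still shows up as running here, the pause path stopped the kubelet but left the control-plane containers unpaused, which matches the Running/Stopped split above.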
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-190777
helpers_test.go:244: (dbg) docker inspect newest-cni-190777:

-- stdout --
	[
	    {
	        "Id": "f1dfa2626e9c4d62ad6580aac4ba29899cf7a30d2cb0196b45bdaa28beed1810",
	        "Created": "2025-12-28T07:04:48.252039139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 900206,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:05:10.613566305Z",
	            "FinishedAt": "2025-12-28T07:05:09.761787399Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/f1dfa2626e9c4d62ad6580aac4ba29899cf7a30d2cb0196b45bdaa28beed1810/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f1dfa2626e9c4d62ad6580aac4ba29899cf7a30d2cb0196b45bdaa28beed1810/hostname",
	        "HostsPath": "/var/lib/docker/containers/f1dfa2626e9c4d62ad6580aac4ba29899cf7a30d2cb0196b45bdaa28beed1810/hosts",
	        "LogPath": "/var/lib/docker/containers/f1dfa2626e9c4d62ad6580aac4ba29899cf7a30d2cb0196b45bdaa28beed1810/f1dfa2626e9c4d62ad6580aac4ba29899cf7a30d2cb0196b45bdaa28beed1810-json.log",
	        "Name": "/newest-cni-190777",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-190777:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-190777",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f1dfa2626e9c4d62ad6580aac4ba29899cf7a30d2cb0196b45bdaa28beed1810",
	                "LowerDir": "/var/lib/docker/overlay2/4abbebac04c919ec9bbb3d3872387ff87123e787dc203447f7d5777236336d18-init/diff:/var/lib/docker/overlay2/dfc7a4c580b7be84b0f83410b7478c6f6aa3f00b996556623ab9129bd6527422/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4abbebac04c919ec9bbb3d3872387ff87123e787dc203447f7d5777236336d18/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4abbebac04c919ec9bbb3d3872387ff87123e787dc203447f7d5777236336d18/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4abbebac04c919ec9bbb3d3872387ff87123e787dc203447f7d5777236336d18/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-190777",
	                "Source": "/var/lib/docker/volumes/newest-cni-190777/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-190777",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-190777",
	                "name.minikube.sigs.k8s.io": "newest-cni-190777",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "609c3a3286cd90dd8023fa30a2859c087b4203a524be3861dd0548b4b9498d19",
	            "SandboxKey": "/var/run/docker/netns/609c3a3286cd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-190777": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "32fc2d20685bfe29f502fec10c21e34a394a9026fef35b9d3ac0d29f1f26e5d6",
	                    "EndpointID": "3a90b7b287dbfc1e409a94866ba3e083f8b98a7244c12ea14f0a419b03b2470b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "76:09:f7:b2:35:6c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-190777",
	                        "f1dfa2626e9c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
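The PortBindings in the HostConfig above request ephemeral host ports (empty HostPort), and NetworkSettings.Ports shows what Docker actually allocated. A single mapping can be read back with docker port, for example:

	docker port newest-cni-190777 22
	# 127.0.0.1:33143 -- the endpoint the SSH provisioner dials in the log below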
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-190777 -n newest-cni-190777
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-190777 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p newest-cni-190777 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0 │ newest-cni-190777                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:05 UTC │
	│ delete  │ -p old-k8s-version-805353                                                                                                                                                                                                                           │ old-k8s-version-805353            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ start   │ -p test-preload-dl-gcs-287055 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd                                                                        │ test-preload-dl-gcs-287055        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │                     │
	│ image   │ default-k8s-diff-port-129908 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-129908      │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p default-k8s-diff-port-129908 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-129908      │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ delete  │ -p test-preload-dl-gcs-287055                                                                                                                                                                                                                       │ test-preload-dl-gcs-287055        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ start   │ -p test-preload-dl-github-941249 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd                                                                  │ test-preload-dl-github-941249     │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │                     │
	│ unpause │ -p default-k8s-diff-port-129908 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-129908      │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ image   │ embed-certs-982151 image list --format=json                                                                                                                                                                                                         │ embed-certs-982151                │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p embed-certs-982151 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-982151                │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:05 UTC │
	│ unpause │ -p embed-certs-982151 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-982151                │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ delete  │ -p default-k8s-diff-port-129908                                                                                                                                                                                                                     │ default-k8s-diff-port-129908      │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ delete  │ -p test-preload-dl-github-941249                                                                                                                                                                                                                    │ test-preload-dl-github-941249     │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-832630 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd                                                                 │ test-preload-dl-gcs-cached-832630 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-832630                                                                                                                                                                                                                │ test-preload-dl-gcs-cached-832630 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ delete  │ -p embed-certs-982151                                                                                                                                                                                                                               │ embed-certs-982151                │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ delete  │ -p default-k8s-diff-port-129908                                                                                                                                                                                                                     │ default-k8s-diff-port-129908      │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ addons  │ enable metrics-server -p newest-cni-190777 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-190777                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ stop    │ -p newest-cni-190777 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-190777                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ delete  │ -p embed-certs-982151                                                                                                                                                                                                                               │ embed-certs-982151                │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ addons  │ enable dashboard -p newest-cni-190777 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-190777                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ start   │ -p newest-cni-190777 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0 │ newest-cni-190777                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ image   │ newest-cni-190777 image list --format=json                                                                                                                                                                                                          │ newest-cni-190777                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ pause   │ -p newest-cni-190777 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-190777                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ unpause │ -p newest-cni-190777 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-190777                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:05:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:05:10.393806  900004 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:05:10.393948  900004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:05:10.393961  900004 out.go:374] Setting ErrFile to fd 2...
	I1228 07:05:10.393967  900004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:05:10.394165  900004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 07:05:10.394646  900004 out.go:368] Setting JSON to false
	I1228 07:05:10.395667  900004 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13654,"bootTime":1766891856,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 07:05:10.395732  900004 start.go:143] virtualization: kvm guest
	I1228 07:05:10.397602  900004 out.go:179] * [newest-cni-190777] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 07:05:10.398667  900004 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:05:10.398694  900004 notify.go:221] Checking for updates...
	I1228 07:05:10.400682  900004 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:05:10.401741  900004 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:05:10.402739  900004 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	I1228 07:05:10.403892  900004 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 07:05:10.404947  900004 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:05:10.406605  900004 config.go:182] Loaded profile config "newest-cni-190777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:05:10.407410  900004 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:05:10.431516  900004 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 07:05:10.431601  900004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:05:10.487374  900004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-28 07:05:10.478009878 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 07:05:10.487473  900004 docker.go:319] overlay module found
	I1228 07:05:10.489731  900004 out.go:179] * Using the docker driver based on existing profile
	I1228 07:05:10.490919  900004 start.go:309] selected driver: docker
	I1228 07:05:10.490936  900004 start.go:928] validating driver "docker" against &{Name:newest-cni-190777 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:05:10.491043  900004 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:05:10.491571  900004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:05:10.545533  900004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-28 07:05:10.535980615 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 07:05:10.545878  900004 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1228 07:05:10.545926  900004 cni.go:84] Creating CNI manager for ""
	I1228 07:05:10.546013  900004 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:05:10.546068  900004 start.go:353] cluster config:
	{Name:newest-cni-190777 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:05:10.547681  900004 out.go:179] * Starting "newest-cni-190777" primary control-plane node in "newest-cni-190777" cluster
	I1228 07:05:10.548878  900004 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:05:10.549953  900004 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:05:10.550907  900004 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:05:10.550944  900004 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-552174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-amd64.tar.lz4
	I1228 07:05:10.550962  900004 cache.go:65] Caching tarball of preloaded images
	I1228 07:05:10.551021  900004 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:05:10.551064  900004 preload.go:251] Found /home/jenkins/minikube-integration/22352-552174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1228 07:05:10.551079  900004 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1228 07:05:10.551278  900004 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/config.json ...
	I1228 07:05:10.570416  900004 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:05:10.570433  900004 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:05:10.570462  900004 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:05:10.570512  900004 start.go:360] acquireMachinesLock for newest-cni-190777: {Name:mkb88475b99b1170872fe76b7e2a784e228c1e71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:05:10.570593  900004 start.go:364] duration metric: took 45.554µs to acquireMachinesLock for "newest-cni-190777"
	I1228 07:05:10.570614  900004 start.go:96] Skipping create...Using existing machine configuration
	I1228 07:05:10.570625  900004 fix.go:54] fixHost starting: 
	I1228 07:05:10.570858  900004 cli_runner.go:164] Run: docker container inspect newest-cni-190777 --format={{.State.Status}}
	I1228 07:05:10.588561  900004 fix.go:112] recreateIfNeeded on newest-cni-190777: state=Stopped err=<nil>
	W1228 07:05:10.588603  900004 fix.go:138] unexpected machine state, will restart: <nil>
	I1228 07:05:10.590184  900004 out.go:252] * Restarting existing docker container for "newest-cni-190777" ...
	I1228 07:05:10.590258  900004 cli_runner.go:164] Run: docker start newest-cni-190777
	I1228 07:05:10.826355  900004 cli_runner.go:164] Run: docker container inspect newest-cni-190777 --format={{.State.Status}}
	I1228 07:05:10.843692  900004 kic.go:430] container "newest-cni-190777" state is running.
	I1228 07:05:10.844134  900004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190777
	I1228 07:05:10.862082  900004 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/config.json ...
	I1228 07:05:10.862382  900004 machine.go:94] provisionDockerMachine start ...
	I1228 07:05:10.862463  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:10.880241  900004 main.go:144] libmachine: Using SSH client type: native
	I1228 07:05:10.880538  900004 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1228 07:05:10.880554  900004 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:05:10.881370  900004 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51356->127.0.0.1:33143: read: connection reset by peer
	I1228 07:05:14.005566  900004 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-190777
	
	I1228 07:05:14.005597  900004 ubuntu.go:182] provisioning hostname "newest-cni-190777"
	I1228 07:05:14.005674  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:14.023817  900004 main.go:144] libmachine: Using SSH client type: native
	I1228 07:05:14.024050  900004 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1228 07:05:14.024063  900004 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-190777 && echo "newest-cni-190777" | sudo tee /etc/hostname
	I1228 07:05:14.156389  900004 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-190777
	
	I1228 07:05:14.156486  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:14.173835  900004 main.go:144] libmachine: Using SSH client type: native
	I1228 07:05:14.174135  900004 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1228 07:05:14.174157  900004 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-190777' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-190777/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-190777' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:05:14.295865  900004 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 07:05:14.295893  900004 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-552174/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-552174/.minikube}
	I1228 07:05:14.295926  900004 ubuntu.go:190] setting up certificates
	I1228 07:05:14.295947  900004 provision.go:84] configureAuth start
	I1228 07:05:14.296023  900004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190777
	I1228 07:05:14.312985  900004 provision.go:143] copyHostCerts
	I1228 07:05:14.313064  900004 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem, removing ...
	I1228 07:05:14.313088  900004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem
	I1228 07:05:14.313176  900004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem (1078 bytes)
	I1228 07:05:14.313329  900004 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem, removing ...
	I1228 07:05:14.313342  900004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem
	I1228 07:05:14.313378  900004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem (1123 bytes)
	I1228 07:05:14.313460  900004 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem, removing ...
	I1228 07:05:14.313469  900004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem
	I1228 07:05:14.313497  900004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem (1675 bytes)
	I1228 07:05:14.313569  900004 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem org=jenkins.newest-cni-190777 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-190777]
	I1228 07:05:14.447685  900004 provision.go:177] copyRemoteCerts
	I1228 07:05:14.447764  900004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:05:14.447811  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:14.465416  900004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/newest-cni-190777/id_rsa Username:docker}
	I1228 07:05:14.557120  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1228 07:05:14.575088  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1228 07:05:14.592738  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 07:05:14.609700  900004 provision.go:87] duration metric: took 313.718646ms to configureAuth
	I1228 07:05:14.609726  900004 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:05:14.609902  900004 config.go:182] Loaded profile config "newest-cni-190777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:05:14.609917  900004 machine.go:97] duration metric: took 3.747516141s to provisionDockerMachine
	I1228 07:05:14.609928  900004 start.go:293] postStartSetup for "newest-cni-190777" (driver="docker")
	I1228 07:05:14.609941  900004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:05:14.609996  900004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:05:14.610045  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:14.628822  900004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/newest-cni-190777/id_rsa Username:docker}
	I1228 07:05:14.719656  900004 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:05:14.723197  900004 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:05:14.723237  900004 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:05:14.723249  900004 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-552174/.minikube/addons for local assets ...
	I1228 07:05:14.723298  900004 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-552174/.minikube/files for local assets ...
	I1228 07:05:14.723372  900004 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem -> 5558782.pem in /etc/ssl/certs
	I1228 07:05:14.723465  900004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:05:14.730931  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem --> /etc/ssl/certs/5558782.pem (1708 bytes)
	I1228 07:05:14.747882  900004 start.go:296] duration metric: took 137.937725ms for postStartSetup
	I1228 07:05:14.747974  900004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:05:14.748019  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:14.765698  900004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/newest-cni-190777/id_rsa Username:docker}
	I1228 07:05:14.854348  900004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:05:14.858974  900004 fix.go:56] duration metric: took 4.288344457s for fixHost
	I1228 07:05:14.859005  900004 start.go:83] releasing machines lock for "newest-cni-190777", held for 4.288397064s
	I1228 07:05:14.859079  900004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190777
	I1228 07:05:14.876492  900004 ssh_runner.go:195] Run: cat /version.json
	I1228 07:05:14.876536  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:14.876566  900004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 07:05:14.876676  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:14.894678  900004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/newest-cni-190777/id_rsa Username:docker}
	I1228 07:05:14.894947  900004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/newest-cni-190777/id_rsa Username:docker}
	I1228 07:05:15.036051  900004 ssh_runner.go:195] Run: systemctl --version
	I1228 07:05:15.042856  900004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:05:15.047548  900004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:05:15.047644  900004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:05:15.055474  900004 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 07:05:15.055497  900004 start.go:496] detecting cgroup driver to use...
	I1228 07:05:15.055526  900004 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 07:05:15.055574  900004 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1228 07:05:15.070948  900004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:05:15.082930  900004 docker.go:218] disabling cri-docker service (if available) ...
	I1228 07:05:15.082973  900004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 07:05:15.096547  900004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 07:05:15.107966  900004 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 07:05:15.183858  900004 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 07:05:15.261886  900004 docker.go:234] disabling docker service ...
	I1228 07:05:15.261957  900004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 07:05:15.275857  900004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 07:05:15.287914  900004 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 07:05:15.365210  900004 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 07:05:15.444639  900004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 07:05:15.457558  900004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:05:15.472339  900004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1228 07:05:15.481165  900004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:05:15.489818  900004 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1228 07:05:15.489883  900004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1228 07:05:15.498431  900004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:05:15.506828  900004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:05:15.515271  900004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:05:15.523537  900004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:05:15.531367  900004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:05:15.539547  900004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:05:15.547913  900004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
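Taken together, the sed passes above rewrite /etc/containerd/config.toml in place: pause image, OOM score handling, the runc v2 shim, systemd cgroups, the CNI conf dir, and unprivileged ports. A minimal sketch of the fragment they converge on (illustrative only; the exact table nesting varies by containerd version):

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10.1"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = true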
	I1228 07:05:15.556265  900004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:05:15.563253  900004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:05:15.570251  900004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:05:15.645715  900004 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1228 07:05:15.747254  900004 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1228 07:05:15.747320  900004 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1228 07:05:15.751677  900004 start.go:574] Will wait 60s for crictl version
	I1228 07:05:15.751732  900004 ssh_runner.go:195] Run: which crictl
	I1228 07:05:15.755366  900004 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:05:15.780374  900004 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1228 07:05:15.780438  900004 ssh_runner.go:195] Run: containerd --version
	I1228 07:05:15.803257  900004 ssh_runner.go:195] Run: containerd --version
	I1228 07:05:15.827968  900004 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1228 07:05:15.829178  900004 cli_runner.go:164] Run: docker network inspect newest-cni-190777 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
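Fed through that template, docker network inspect emits a compact one-line JSON summary of the cluster network. A sketch of the output for this network (Subnet and Gateway inferred from the 192.168.103.x addresses that follow; Driver and MTU are assumptions, and the trailing comma inside ContainerIPs is a literal artifact of the template's range loop):

	{"Name": "newest-cni-190777","Driver": "bridge","Subnet": "192.168.103.0/24","Gateway": "192.168.103.1","MTU": 1500, "ContainerIPs": ["192.168.103.2/24",]}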
	I1228 07:05:15.846435  900004 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1228 07:05:15.850689  900004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
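That bash one-liner is an idempotent /etc/hosts edit: grep -v strips any stale host.minikube.internal entry, echo appends the fresh mapping, and the temp file is copied back over /etc/hosts, leaving:

	192.168.103.1	host.minikube.internal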
	I1228 07:05:15.862468  900004 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1228 07:05:15.863440  900004 kubeadm.go:884] updating cluster {Name:newest-cni-190777 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:05:15.863578  900004 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:05:15.863641  900004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:05:15.890088  900004 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:05:15.890117  900004 containerd.go:542] Images already preloaded, skipping extraction
	I1228 07:05:15.890183  900004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:05:15.914187  900004 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:05:15.914212  900004 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:05:15.914233  900004 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0 containerd true true} ...
	I1228 07:05:15.914359  900004 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-190777 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
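The doubled ExecStart= in the rendered unit above is deliberate systemd idiom: in a drop-in, an empty ExecStart= first clears the command inherited from the base kubelet.service, and the second assignment installs minikube's own command line; without the reset, systemd rejects a second ExecStart for a non-oneshot service. In sketch, the Service stanza of the drop-in shipped below as 10-kubeadm.conf:

	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (Service stanza; file is generated in memory)
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-190777 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2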
	I1228 07:05:15.914435  900004 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1228 07:05:15.940000  900004 cni.go:84] Creating CNI manager for ""
	I1228 07:05:15.940026  900004 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:05:15.940048  900004 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1228 07:05:15.940080  900004 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-190777 NodeName:newest-cni-190777 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:05:15.940205  900004 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-190777"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 07:05:15.940354  900004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:05:15.948654  900004 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:05:15.948707  900004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:05:15.956396  900004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1228 07:05:15.968457  900004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:05:15.980652  900004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
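The rendered manifest lands on the node as /var/tmp/minikube/kubeadm.yaml.new (2230 bytes above). As a hypothetical check outside this test run, kubeadm itself can validate such a file (kubeadm v1.26 or newer):

	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new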
	I1228 07:05:15.992746  900004 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:05:15.996237  900004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:05:16.006135  900004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:05:16.084895  900004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:05:16.115569  900004 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777 for IP: 192.168.103.2
	I1228 07:05:16.115603  900004 certs.go:195] generating shared ca certs ...
	I1228 07:05:16.115629  900004 certs.go:227] acquiring lock for ca certs: {Name:mkf3a34076ce55c96c0ca7e803bd863f5c48e3ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:05:16.115803  900004 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key
	I1228 07:05:16.115886  900004 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key
	I1228 07:05:16.115901  900004 certs.go:257] generating profile certs ...
	I1228 07:05:16.116024  900004 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.key
	I1228 07:05:16.116102  900004 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key.74b88bb9
	I1228 07:05:16.116159  900004 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.key
	I1228 07:05:16.116331  900004 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem (1338 bytes)
	W1228 07:05:16.116380  900004 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878_empty.pem, impossibly tiny 0 bytes
	I1228 07:05:16.116390  900004 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 07:05:16.116426  900004 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem (1078 bytes)
	I1228 07:05:16.116459  900004 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:05:16.116493  900004 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem (1675 bytes)
	I1228 07:05:16.116552  900004 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem (1708 bytes)
	I1228 07:05:16.117585  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:05:16.137146  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:05:16.156405  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:05:16.175078  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:05:16.197399  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1228 07:05:16.219050  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1228 07:05:16.236681  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:05:16.253553  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 07:05:16.270285  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem --> /usr/share/ca-certificates/555878.pem (1338 bytes)
	I1228 07:05:16.287095  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem --> /usr/share/ca-certificates/5558782.pem (1708 bytes)
	I1228 07:05:16.306160  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:05:16.323252  900004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:05:16.335548  900004 ssh_runner.go:195] Run: openssl version
	I1228 07:05:16.342164  900004 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/555878.pem
	I1228 07:05:16.349195  900004 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/555878.pem /etc/ssl/certs/555878.pem
	I1228 07:05:16.356663  900004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/555878.pem
	I1228 07:05:16.360426  900004 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/555878.pem
	I1228 07:05:16.360481  900004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/555878.pem
	I1228 07:05:16.394407  900004 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:05:16.402398  900004 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5558782.pem
	I1228 07:05:16.409706  900004 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5558782.pem /etc/ssl/certs/5558782.pem
	I1228 07:05:16.417048  900004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5558782.pem
	I1228 07:05:16.420805  900004 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/5558782.pem
	I1228 07:05:16.420859  900004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5558782.pem
	I1228 07:05:16.455924  900004 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:05:16.464153  900004 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:05:16.471456  900004 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:05:16.478780  900004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:05:16.482612  900004 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:29 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:05:16.482669  900004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:05:16.515790  900004 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
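The three test/ln/openssl rounds above hand-build what update-ca-certificates normally does: each PEM gets a symlink under /etc/ssl/certs named after its OpenSSL subject hash, so TLS lookups by hash succeed. The idiom, shown for the minikubeCA cert whose hash b5213941 appears above:

	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")    # prints b5213941 for this CA
	sudo ln -fs "$pem" "/etc/ssl/certs/$hash.0"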
	I1228 07:05:16.523108  900004 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:05:16.526664  900004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 07:05:16.561497  900004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 07:05:16.597107  900004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 07:05:16.634177  900004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 07:05:16.675473  900004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 07:05:16.727906  900004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
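Each -checkend 86400 probe exits non-zero if the certificate would expire within the next 24 hours (86400 seconds); all six passing here is evidently what lets minikube reuse the existing control-plane certs instead of regenerating them. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 && echo ok || echo "expiring within 24h"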
	I1228 07:05:16.774799  900004 kubeadm.go:401] StartCluster: {Name:newest-cni-190777 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:05:16.774959  900004 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1228 07:05:16.794660  900004 cri.go:83] list returned 5 containers
	I1228 07:05:16.794735  900004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:05:16.805252  900004 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 07:05:16.805275  900004 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 07:05:16.805330  900004 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 07:05:16.815987  900004 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 07:05:16.816640  900004 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-190777" does not appear in /home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:05:16.816821  900004 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-552174/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-190777" cluster setting kubeconfig missing "newest-cni-190777" context setting]
	I1228 07:05:16.817241  900004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/kubeconfig: {Name:mk8eb66c78260a0013ac235827a08f86055faf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:05:16.818869  900004 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 07:05:16.830867  900004 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1228 07:05:16.830909  900004 kubeadm.go:602] duration metric: took 25.625265ms to restartPrimaryControlPlane
	I1228 07:05:16.830919  900004 kubeadm.go:403] duration metric: took 56.134163ms to StartCluster
	I1228 07:05:16.830935  900004 settings.go:142] acquiring lock: {Name:mk0a3f928fed4bf13f8897ea15768d1c7b315118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:05:16.831013  900004 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:05:16.832276  900004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/kubeconfig: {Name:mk8eb66c78260a0013ac235827a08f86055faf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:05:16.832530  900004 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1228 07:05:16.832712  900004 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 07:05:16.832822  900004 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-190777"
	I1228 07:05:16.832851  900004 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-190777"
	W1228 07:05:16.832865  900004 addons.go:248] addon storage-provisioner should already be in state true
	I1228 07:05:16.832860  900004 addons.go:70] Setting default-storageclass=true in profile "newest-cni-190777"
	I1228 07:05:16.832896  900004 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-190777"
	I1228 07:05:16.832900  900004 config.go:182] Loaded profile config "newest-cni-190777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:05:16.832910  900004 addons.go:70] Setting metrics-server=true in profile "newest-cni-190777"
	I1228 07:05:16.832928  900004 addons.go:239] Setting addon metrics-server=true in "newest-cni-190777"
	W1228 07:05:16.832935  900004 addons.go:248] addon metrics-server should already be in state true
	I1228 07:05:16.832899  900004 host.go:66] Checking if "newest-cni-190777" exists ...
	I1228 07:05:16.832952  900004 host.go:66] Checking if "newest-cni-190777" exists ...
	I1228 07:05:16.833290  900004 cli_runner.go:164] Run: docker container inspect newest-cni-190777 --format={{.State.Status}}
	I1228 07:05:16.833443  900004 cli_runner.go:164] Run: docker container inspect newest-cni-190777 --format={{.State.Status}}
	I1228 07:05:16.833502  900004 cli_runner.go:164] Run: docker container inspect newest-cni-190777 --format={{.State.Status}}
	I1228 07:05:16.833556  900004 addons.go:70] Setting dashboard=true in profile "newest-cni-190777"
	I1228 07:05:16.833581  900004 addons.go:239] Setting addon dashboard=true in "newest-cni-190777"
	W1228 07:05:16.833590  900004 addons.go:248] addon dashboard should already be in state true
	I1228 07:05:16.833647  900004 host.go:66] Checking if "newest-cni-190777" exists ...
	I1228 07:05:16.834090  900004 cli_runner.go:164] Run: docker container inspect newest-cni-190777 --format={{.State.Status}}
	I1228 07:05:16.836339  900004 out.go:179] * Verifying Kubernetes components...
	I1228 07:05:16.838515  900004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:05:16.860964  900004 addons.go:239] Setting addon default-storageclass=true in "newest-cni-190777"
	W1228 07:05:16.860993  900004 addons.go:248] addon default-storageclass should already be in state true
	I1228 07:05:16.861110  900004 host.go:66] Checking if "newest-cni-190777" exists ...
	I1228 07:05:16.861843  900004 cli_runner.go:164] Run: docker container inspect newest-cni-190777 --format={{.State.Status}}
	I1228 07:05:16.869633  900004 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 07:05:16.870489  900004 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1228 07:05:16.871027  900004 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:05:16.871055  900004 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 07:05:16.871140  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:16.871854  900004 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1228 07:05:16.871879  900004 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1228 07:05:16.871943  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:16.872103  900004 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 07:05:16.873809  900004 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1228 07:05:16.874851  900004 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 07:05:16.874871  900004 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 07:05:16.874934  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:16.904859  900004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/newest-cni-190777/id_rsa Username:docker}
	I1228 07:05:16.905475  900004 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 07:05:16.905503  900004 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 07:05:16.905566  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:16.905981  900004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/newest-cni-190777/id_rsa Username:docker}
	I1228 07:05:16.909327  900004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/newest-cni-190777/id_rsa Username:docker}
	I1228 07:05:16.936278  900004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/newest-cni-190777/id_rsa Username:docker}
	I1228 07:05:17.007706  900004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:05:17.024849  900004 api_server.go:52] waiting for apiserver process to appear ...
	I1228 07:05:17.024932  900004 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 07:05:17.027968  900004 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 07:05:17.027988  900004 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 07:05:17.029114  900004 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1228 07:05:17.029131  900004 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1228 07:05:17.033257  900004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:05:17.045286  900004 api_server.go:72] duration metric: took 212.718533ms to wait for apiserver process to appear ...
	I1228 07:05:17.045321  900004 api_server.go:88] waiting for apiserver healthz status ...
	I1228 07:05:17.045353  900004 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1228 07:05:17.047627  900004 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 07:05:17.047649  900004 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 07:05:17.049866  900004 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1228 07:05:17.049890  900004 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1228 07:05:17.053971  900004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:05:17.066659  900004 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 07:05:17.066683  900004 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 07:05:17.072603  900004 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:05:17.073069  900004 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1228 07:05:17.084118  900004 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 07:05:17.084135  900004 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 07:05:17.093327  900004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:05:17.102101  900004 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 07:05:17.102123  900004 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 07:05:17.122741  900004 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 07:05:17.122764  900004 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 07:05:17.137698  900004 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 07:05:17.137721  900004 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 07:05:17.152247  900004 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 07:05:17.152267  900004 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 07:05:17.165972  900004 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:05:17.165994  900004 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 07:05:17.178845  900004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:05:18.225381  900004 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1228 07:05:18.225409  900004 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1228 07:05:18.225427  900004 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1228 07:05:18.234327  900004 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1228 07:05:18.234432  900004 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1228 07:05:18.546065  900004 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1228 07:05:18.551407  900004 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 07:05:18.551435  900004 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 07:05:18.774476  900004 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.741180341s)
	I1228 07:05:18.774520  900004 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.720526625s)
	I1228 07:05:18.790482  900004 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.69711601s)
	I1228 07:05:18.790524  900004 addons.go:495] Verifying addon metrics-server=true in "newest-cni-190777"
	I1228 07:05:18.790593  900004 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.611708535s)
	I1228 07:05:18.792211  900004 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-190777 addons enable metrics-server
	
	I1228 07:05:18.793426  900004 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1228 07:05:18.794524  900004 addons.go:530] duration metric: took 1.961820285s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1228 07:05:19.045848  900004 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1228 07:05:19.050039  900004 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 07:05:19.050068  900004 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 07:05:19.546460  900004 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1228 07:05:19.551281  900004 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1228 07:05:19.552473  900004 api_server.go:141] control plane version: v1.35.0
	I1228 07:05:19.552502  900004 api_server.go:131] duration metric: took 2.507172453s to wait for apiserver health ...
	I1228 07:05:19.552513  900004 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 07:05:19.556813  900004 system_pods.go:59] 9 kube-system pods found
	I1228 07:05:19.556861  900004 system_pods.go:61] "coredns-7d764666f9-4jmjw" [3e1ad652-5902-4ab5-a6f3-d7a1b31f4bbe] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1228 07:05:19.556882  900004 system_pods.go:61] "etcd-newest-cni-190777" [27a1a0b7-aa37-43ea-b386-98ecd563f757] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:05:19.556906  900004 system_pods.go:61] "kindnet-zzjsv" [f2a105ad-3a98-4a28-9085-808241a62768] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:05:19.556926  900004 system_pods.go:61] "kube-apiserver-newest-cni-190777" [c623afdc-762c-422b-bc01-6835a086e78e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:05:19.556940  900004 system_pods.go:61] "kube-controller-manager-newest-cni-190777" [9716ceb3-ce70-4935-9688-8bbaa305f037] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:05:19.556953  900004 system_pods.go:61] "kube-proxy-jtmkx" [ddc1bc5a-c4b2-4e33-a540-aca0afd3fae6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:05:19.556962  900004 system_pods.go:61] "kube-scheduler-newest-cni-190777" [13d890be-e9c3-4161-9fad-7b97d2d8ba26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:05:19.556970  900004 system_pods.go:61] "metrics-server-5d785b57d4-89lc5" [29e0f7b1-e316-43e9-bb28-3427c18190a2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1228 07:05:19.556985  900004 system_pods.go:61] "storage-provisioner" [8af786ab-27b8-44bc-80de-85dcff282c60] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1228 07:05:19.556993  900004 system_pods.go:74] duration metric: took 4.474585ms to wait for pod list to return data ...
	I1228 07:05:19.557003  900004 default_sa.go:34] waiting for default service account to be created ...
	I1228 07:05:19.559796  900004 default_sa.go:45] found service account: "default"
	I1228 07:05:19.559816  900004 default_sa.go:55] duration metric: took 2.806415ms for default service account to be created ...
	I1228 07:05:19.559827  900004 kubeadm.go:587] duration metric: took 2.727266199s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1228 07:05:19.559846  900004 node_conditions.go:102] verifying NodePressure condition ...
	I1228 07:05:19.562689  900004 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 07:05:19.562708  900004 node_conditions.go:123] node cpu capacity is 8
	I1228 07:05:19.562722  900004 node_conditions.go:105] duration metric: took 2.871443ms to run NodePressure ...
	I1228 07:05:19.562733  900004 start.go:242] waiting for startup goroutines ...
	I1228 07:05:19.562740  900004 start.go:247] waiting for cluster config update ...
	I1228 07:05:19.562751  900004 start.go:256] writing updated cluster config ...
	I1228 07:05:19.563004  900004 ssh_runner.go:195] Run: rm -f paused
	I1228 07:05:19.614643  900004 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 07:05:19.616905  900004 out.go:179] * Done! kubectl is now configured to use "newest-cni-190777" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	83c9f5179f673       32652ff1bbe6b       3 seconds ago       Running             kube-proxy                1                   f644bdb353597       kube-proxy-jtmkx                            kube-system
	6dc1fafeed576       2c9a4b058bd7e       6 seconds ago       Running             kube-controller-manager   1                   7ffd34da11262       kube-controller-manager-newest-cni-190777   kube-system
	6c11d536061f6       550794e3b12ac       6 seconds ago       Running             kube-scheduler            1                   7a619959cadd7       kube-scheduler-newest-cni-190777            kube-system
	0d35837bfc00c       5c6acd67e9cd1       6 seconds ago       Running             kube-apiserver            1                   e008cbe31b34c       kube-apiserver-newest-cni-190777            kube-system
	e262a74ab75ab       0a108f7189562       6 seconds ago       Running             etcd                      1                   668433a17b14d       etcd-newest-cni-190777                      kube-system
	0888a08e04899       32652ff1bbe6b       15 seconds ago      Exited              kube-proxy                0                   6d945055cbfba       kube-proxy-jtmkx                            kube-system
	bf133646ee864       550794e3b12ac       25 seconds ago      Exited              kube-scheduler            0                   10e6f2846895c       kube-scheduler-newest-cni-190777            kube-system
	a08faf32774b6       2c9a4b058bd7e       25 seconds ago      Exited              kube-controller-manager   0                   3925c7a01f4b2       kube-controller-manager-newest-cni-190777   kube-system
	02610c2e73d55       0a108f7189562       25 seconds ago      Exited              etcd                      0                   b6cbf6ad192a5       etcd-newest-cni-190777                      kube-system
	1780088a12194       5c6acd67e9cd1       25 seconds ago      Exited              kube-apiserver            0                   779df21e00a15       kube-apiserver-newest-cni-190777            kube-system
	
	
	==> containerd <==
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.505633575Z" level=info msg="StopPodSandbox for \"6d945055cbfba465592d2a9dd9bfa4f1160b84c45ac14af2ad8bf4481d078ce3\" returns successfully"
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.508460743Z" level=info msg="RunPodSandbox for name:\"kube-proxy-jtmkx\"  uid:\"ddc1bc5a-c4b2-4e33-a540-aca0afd3fae6\"  namespace:\"kube-system\"  attempt:1"
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.509960590Z" level=info msg="connecting to shim 6b963351520a6acb0def08068674098b08393371dfd60bf0f0855c11539c8412" address="unix:///run/containerd/s/8c2e40632c6cd3bc305ea2cc51992cb1707a8d1d198d510e7bfedfa0df479661" namespace=k8s.io protocol=ttrpc version=3
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.524873955Z" level=info msg="connecting to shim f644bdb3535974d48306bee1e05e3b8816d46ca8e947df02b23b584fab2521b5" address="unix:///run/containerd/s/302ed862e7a7f4f84287374b5f7b7ad43a021f2a35f41dcab7b06853a5a846bb" namespace=k8s.io protocol=ttrpc version=3
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.566536806Z" level=info msg="RunPodSandbox for name:\"kube-proxy-jtmkx\"  uid:\"ddc1bc5a-c4b2-4e33-a540-aca0afd3fae6\"  namespace:\"kube-system\"  attempt:1 returns sandbox id \"f644bdb3535974d48306bee1e05e3b8816d46ca8e947df02b23b584fab2521b5\""
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.569777977Z" level=info msg="CreateContainer within sandbox \"f644bdb3535974d48306bee1e05e3b8816d46ca8e947df02b23b584fab2521b5\" for container name:\"kube-proxy\"  attempt:1"
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.577792742Z" level=info msg="Container 83c9f5179f673e061c5bee98854480bc9024254941d403fd1f5a703a22b26178: CDI devices from CRI Config.CDIDevices: []"
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.586045900Z" level=info msg="CreateContainer within sandbox \"f644bdb3535974d48306bee1e05e3b8816d46ca8e947df02b23b584fab2521b5\" for name:\"kube-proxy\"  attempt:1 returns container id \"83c9f5179f673e061c5bee98854480bc9024254941d403fd1f5a703a22b26178\""
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.586498613Z" level=info msg="StartContainer for \"83c9f5179f673e061c5bee98854480bc9024254941d403fd1f5a703a22b26178\""
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.587959824Z" level=info msg="connecting to shim 83c9f5179f673e061c5bee98854480bc9024254941d403fd1f5a703a22b26178" address="unix:///run/containerd/s/302ed862e7a7f4f84287374b5f7b7ad43a021f2a35f41dcab7b06853a5a846bb" protocol=ttrpc version=3
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.664782626Z" level=info msg="StartContainer for \"83c9f5179f673e061c5bee98854480bc9024254941d403fd1f5a703a22b26178\" returns successfully"
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.764922454Z" level=info msg="RunPodSandbox for name:\"kindnet-zzjsv\"  uid:\"f2a105ad-3a98-4a28-9085-808241a62768\"  namespace:\"kube-system\"  attempt:1 returns sandbox id \"6b963351520a6acb0def08068674098b08393371dfd60bf0f0855c11539c8412\""
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.767108366Z" level=info msg="PullImage \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Dec 28 07:05:20 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:20.613425988Z" level=info msg="stop pulling image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88: active requests=1, bytes read=5415"
	Dec 28 07:05:20 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:20.617413317Z" level=error msg="PullImage \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\" failed" error="rpc error: code = Canceled desc = failed to pull and unpack image \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae\": context canceled"
	Dec 28 07:05:21 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:21.790645060Z" level=info msg="StopPodSandbox for \"dbb6d724979296d0f4a5fe3f6912a6cf513c1efa48be5bf92fdce2ddb029fc87\""
	Dec 28 07:05:21 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:21.793898409Z" level=info msg="TearDown network for sandbox \"dbb6d724979296d0f4a5fe3f6912a6cf513c1efa48be5bf92fdce2ddb029fc87\" successfully"
	Dec 28 07:05:21 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:21.794046081Z" level=info msg="StopPodSandbox for \"dbb6d724979296d0f4a5fe3f6912a6cf513c1efa48be5bf92fdce2ddb029fc87\" returns successfully"
	Dec 28 07:05:21 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:21.796930832Z" level=info msg="RemovePodSandbox for \"dbb6d724979296d0f4a5fe3f6912a6cf513c1efa48be5bf92fdce2ddb029fc87\""
	Dec 28 07:05:21 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:21.796982837Z" level=info msg="Forcibly stopping sandbox \"dbb6d724979296d0f4a5fe3f6912a6cf513c1efa48be5bf92fdce2ddb029fc87\""
	Dec 28 07:05:21 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:21.797497688Z" level=info msg="TearDown network for sandbox \"dbb6d724979296d0f4a5fe3f6912a6cf513c1efa48be5bf92fdce2ddb029fc87\" successfully"
	Dec 28 07:05:21 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:21.800574979Z" level=info msg="Ensure that sandbox dbb6d724979296d0f4a5fe3f6912a6cf513c1efa48be5bf92fdce2ddb029fc87 in task-service has been cleanup successfully"
	Dec 28 07:05:21 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:21.804600429Z" level=info msg="RemovePodSandbox \"dbb6d724979296d0f4a5fe3f6912a6cf513c1efa48be5bf92fdce2ddb029fc87\" returns successfully"
	Dec 28 07:05:21 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:21.978105488Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Dec 28 07:05:23 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:23.086007307Z" level=info msg="PullImage \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	
	
	==> describe nodes <==
	Name:               newest-cni-190777
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-190777
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=newest-cni-190777
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T07_05_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 07:04:59 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-190777
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 07:05:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 07:05:21 +0000   Sun, 28 Dec 2025 07:04:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 07:05:21 +0000   Sun, 28 Dec 2025 07:04:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 07:05:21 +0000   Sun, 28 Dec 2025 07:04:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 28 Dec 2025 07:05:21 +0000   Sun, 28 Dec 2025 07:04:58 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-190777
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                14c0e987-b2e2-44d1-9c76-f5d4ac4e5339
	  Boot ID:                    b0f6328b-901c-4d58-bf8e-80c711dcb897
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-190777                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         21s
	  kube-system                 kindnet-zzjsv                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16s
	  kube-system                 kube-apiserver-newest-cni-190777             250m (3%)     0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-controller-manager-newest-cni-190777    200m (2%)     0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-proxy-jtmkx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 kube-scheduler-newest-cni-190777             100m (1%)     0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  17s   node-controller  Node newest-cni-190777 event: Registered Node newest-cni-190777 in Controller
	  Normal  RegisteredNode  2s    node-controller  Node newest-cni-190777 event: Registered Node newest-cni-190777 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 27 65 9e 6b ac 08 06
	[  +0.000464] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 2e 94 6c 92 e4 08 06
	[Dec28 07:01] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff a6 29 4f 66 86 ca 08 06
	[  +0.004985] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 91 2b a7 d5 cf 08 06
	[ +18.742603] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 b5 93 ca e4 71 08 06
	[  +0.109309] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 c6 37 a9 4f 21 08 06
	[  +8.643062] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 52 6e 36 23 da 08 06
	[ +19.438211] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 70 fa ed e0 93 08 06
	[  +0.000530] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa 52 6e 36 23 da 08 06
	[  +0.857565] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 1d c3 e6 9a 7e 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 29 4f 66 86 ca 08 06
	[Dec28 07:02] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 a0 55 33 45 d1 08 06
	[  +0.000392] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 c6 37 a9 4f 21 08 06
	
	
	==> kernel <==
	 07:05:23 up  3:47,  0 user,  load average: 3.39, 3.11, 10.42
	Linux newest-cni-190777 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:05:21 newest-cni-190777 kubelet[1629]: I1228 07:05:21.996300    1629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/90fca33f03447de0d534ebe1b5b1d064-kubeconfig\") pod \"kube-scheduler-newest-cni-190777\" (UID: \"90fca33f03447de0d534ebe1b5b1d064\") " pod="kube-system/kube-scheduler-newest-cni-190777"
	Dec 28 07:05:21 newest-cni-190777 kubelet[1629]: I1228 07:05:21.996324    1629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/dc06b4198e1df576332d4573d7531cc4-etcd-certs\") pod \"etcd-newest-cni-190777\" (UID: \"dc06b4198e1df576332d4573d7531cc4\") " pod="kube-system/etcd-newest-cni-190777"
	Dec 28 07:05:21 newest-cni-190777 kubelet[1629]: I1228 07:05:21.996343    1629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f7bca5df51a9084bab3a48a1f8022e54-ca-certs\") pod \"kube-apiserver-newest-cni-190777\" (UID: \"f7bca5df51a9084bab3a48a1f8022e54\") " pod="kube-system/kube-apiserver-newest-cni-190777"
	Dec 28 07:05:21 newest-cni-190777 kubelet[1629]: I1228 07:05:21.996367    1629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f7bca5df51a9084bab3a48a1f8022e54-k8s-certs\") pod \"kube-apiserver-newest-cni-190777\" (UID: \"f7bca5df51a9084bab3a48a1f8022e54\") " pod="kube-system/kube-apiserver-newest-cni-190777"
	Dec 28 07:05:21 newest-cni-190777 kubelet[1629]: I1228 07:05:21.996389    1629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f7bca5df51a9084bab3a48a1f8022e54-usr-share-ca-certificates\") pod \"kube-apiserver-newest-cni-190777\" (UID: \"f7bca5df51a9084bab3a48a1f8022e54\") " pod="kube-system/kube-apiserver-newest-cni-190777"
	Dec 28 07:05:21 newest-cni-190777 kubelet[1629]: I1228 07:05:21.996416    1629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f33664d46d270a46d04d25a233fe0f6-usr-local-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-190777\" (UID: \"2f33664d46d270a46d04d25a233fe0f6\") " pod="kube-system/kube-controller-manager-newest-cni-190777"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.777510    1629 apiserver.go:52] "Watching apiserver"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.791295    1629 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.801285    1629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f2a105ad-3a98-4a28-9085-808241a62768-cni-cfg\") pod \"kindnet-zzjsv\" (UID: \"f2a105ad-3a98-4a28-9085-808241a62768\") " pod="kube-system/kindnet-zzjsv"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.801331    1629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2a105ad-3a98-4a28-9085-808241a62768-xtables-lock\") pod \"kindnet-zzjsv\" (UID: \"f2a105ad-3a98-4a28-9085-808241a62768\") " pod="kube-system/kindnet-zzjsv"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.801411    1629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2a105ad-3a98-4a28-9085-808241a62768-lib-modules\") pod \"kindnet-zzjsv\" (UID: \"f2a105ad-3a98-4a28-9085-808241a62768\") " pod="kube-system/kindnet-zzjsv"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.801434    1629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddc1bc5a-c4b2-4e33-a540-aca0afd3fae6-lib-modules\") pod \"kube-proxy-jtmkx\" (UID: \"ddc1bc5a-c4b2-4e33-a540-aca0afd3fae6\") " pod="kube-system/kube-proxy-jtmkx"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.801466    1629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddc1bc5a-c4b2-4e33-a540-aca0afd3fae6-xtables-lock\") pod \"kube-proxy-jtmkx\" (UID: \"ddc1bc5a-c4b2-4e33-a540-aca0afd3fae6\") " pod="kube-system/kube-proxy-jtmkx"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.845992    1629 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-190777"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.846126    1629 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-190777"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.846011    1629 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-190777"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.846234    1629 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-190777"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: E1228 07:05:22.852210    1629 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-190777\" already exists" pod="kube-system/kube-scheduler-newest-cni-190777"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: E1228 07:05:22.852337    1629 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-190777" containerName="kube-scheduler"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: E1228 07:05:22.853536    1629 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-190777\" already exists" pod="kube-system/etcd-newest-cni-190777"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: E1228 07:05:22.853620    1629 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-190777" containerName="etcd"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: E1228 07:05:22.853701    1629 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-190777\" already exists" pod="kube-system/kube-apiserver-newest-cni-190777"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: E1228 07:05:22.853837    1629 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-190777" containerName="kube-apiserver"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: E1228 07:05:22.854026    1629 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-190777\" already exists" pod="kube-system/kube-controller-manager-newest-cni-190777"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: E1228 07:05:22.854102    1629 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-190777" containerName="kube-controller-manager"
	

                                                
                                                
-- /stdout --
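The logs above carry the failure signature for this run: the node keeps the node.kubernetes.io/not-ready:NoSchedule taint because the CNI plugin never initialized (kindnet's pull of docker.io/kindest/kindnetd was canceled mid-request), which is why metrics-server and storage-provisioner stay Pending/Unschedulable. A minimal triage sketch against the live profile, assuming it is still up; these mirror commands the harness itself runs, and crictl being present inside the node is an assumption about the kicbase image:

	kubectl --context newest-cni-190777 describe node newest-cni-190777
	kubectl --context newest-cni-190777 -n kube-system get pods -o wide
	out/minikube-linux-amd64 ssh -p newest-cni-190777 -- sudo crictl ps -a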
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-190777 -n newest-cni-190777
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-190777 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-4jmjw kindnet-zzjsv metrics-server-5d785b57d4-89lc5 storage-provisioner dashboard-metrics-scraper-867fb5f87b-ngt4g kubernetes-dashboard-b84665fb8-ffwqz
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-190777 describe pod coredns-7d764666f9-4jmjw kindnet-zzjsv metrics-server-5d785b57d4-89lc5 storage-provisioner dashboard-metrics-scraper-867fb5f87b-ngt4g kubernetes-dashboard-b84665fb8-ffwqz
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-190777 describe pod coredns-7d764666f9-4jmjw kindnet-zzjsv metrics-server-5d785b57d4-89lc5 storage-provisioner dashboard-metrics-scraper-867fb5f87b-ngt4g kubernetes-dashboard-b84665fb8-ffwqz: exit status 1 (62.761701ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-4jmjw" not found
	Error from server (NotFound): pods "kindnet-zzjsv" not found
	Error from server (NotFound): pods "metrics-server-5d785b57d4-89lc5" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-ngt4g" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-ffwqz" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-190777 describe pod coredns-7d764666f9-4jmjw kindnet-zzjsv metrics-server-5d785b57d4-89lc5 storage-provisioner dashboard-metrics-scraper-867fb5f87b-ngt4g kubernetes-dashboard-b84665fb8-ffwqz: exit status 1
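All six pods that the status.phase!=Running field selector returned moments earlier come back NotFound here, which is consistent with the just-restarted control plane deleting or replacing those pods (replacement pods get fresh name suffixes) between the two kubectl calls. Re-running the selector gives a current snapshot; a sketch, assuming the context still resolves:

	kubectl --context newest-cni-190777 get po -A --field-selector=status.phase!=Running -o wide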
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-190777
helpers_test.go:244: (dbg) docker inspect newest-cni-190777:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f1dfa2626e9c4d62ad6580aac4ba29899cf7a30d2cb0196b45bdaa28beed1810",
	        "Created": "2025-12-28T07:04:48.252039139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 900206,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:05:10.613566305Z",
	            "FinishedAt": "2025-12-28T07:05:09.761787399Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/f1dfa2626e9c4d62ad6580aac4ba29899cf7a30d2cb0196b45bdaa28beed1810/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f1dfa2626e9c4d62ad6580aac4ba29899cf7a30d2cb0196b45bdaa28beed1810/hostname",
	        "HostsPath": "/var/lib/docker/containers/f1dfa2626e9c4d62ad6580aac4ba29899cf7a30d2cb0196b45bdaa28beed1810/hosts",
	        "LogPath": "/var/lib/docker/containers/f1dfa2626e9c4d62ad6580aac4ba29899cf7a30d2cb0196b45bdaa28beed1810/f1dfa2626e9c4d62ad6580aac4ba29899cf7a30d2cb0196b45bdaa28beed1810-json.log",
	        "Name": "/newest-cni-190777",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-190777:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-190777",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f1dfa2626e9c4d62ad6580aac4ba29899cf7a30d2cb0196b45bdaa28beed1810",
	                "LowerDir": "/var/lib/docker/overlay2/4abbebac04c919ec9bbb3d3872387ff87123e787dc203447f7d5777236336d18-init/diff:/var/lib/docker/overlay2/dfc7a4c580b7be84b0f83410b7478c6f6aa3f00b996556623ab9129bd6527422/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4abbebac04c919ec9bbb3d3872387ff87123e787dc203447f7d5777236336d18/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4abbebac04c919ec9bbb3d3872387ff87123e787dc203447f7d5777236336d18/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4abbebac04c919ec9bbb3d3872387ff87123e787dc203447f7d5777236336d18/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-190777",
	                "Source": "/var/lib/docker/volumes/newest-cni-190777/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-190777",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-190777",
	                "name.minikube.sigs.k8s.io": "newest-cni-190777",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "609c3a3286cd90dd8023fa30a2859c087b4203a524be3861dd0548b4b9498d19",
	            "SandboxKey": "/var/run/docker/netns/609c3a3286cd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-190777": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "32fc2d20685bfe29f502fec10c21e34a394a9026fef35b9d3ac0d29f1f26e5d6",
	                    "EndpointID": "3a90b7b287dbfc1e409a94866ba3e083f8b98a7244c12ea14f0a419b03b2470b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "76:09:f7:b2:35:6c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-190777",
	                        "f1dfa2626e9c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
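Within this inspect payload the fields that matter for the post-mortem are State (the container was stopped and restarted: FinishedAt 07:05:09, StartedAt 07:05:10) and NetworkSettings.Ports. Rather than scanning the JSON, the forwarded API server port can be extracted with docker's Go-template flag; a sketch using the container name shown above:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-190777

Against this output it prints 33146, the host-side port a docker-driver profile's kubeconfig would normally target.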
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-190777 -n newest-cni-190777
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-190777 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p newest-cni-190777 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0 │ newest-cni-190777                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:05 UTC │
	│ delete  │ -p old-k8s-version-805353                                                                                                                                                                                                                           │ old-k8s-version-805353            │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ start   │ -p test-preload-dl-gcs-287055 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd                                                                        │ test-preload-dl-gcs-287055        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │                     │
	│ image   │ default-k8s-diff-port-129908 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-129908      │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p default-k8s-diff-port-129908 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-129908      │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ delete  │ -p test-preload-dl-gcs-287055                                                                                                                                                                                                                       │ test-preload-dl-gcs-287055        │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ start   │ -p test-preload-dl-github-941249 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd                                                                  │ test-preload-dl-github-941249     │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │                     │
	│ unpause │ -p default-k8s-diff-port-129908 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-129908      │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ image   │ embed-certs-982151 image list --format=json                                                                                                                                                                                                         │ embed-certs-982151                │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:04 UTC │
	│ pause   │ -p embed-certs-982151 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-982151                │ jenkins │ v1.37.0 │ 28 Dec 25 07:04 UTC │ 28 Dec 25 07:05 UTC │
	│ unpause │ -p embed-certs-982151 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-982151                │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ delete  │ -p default-k8s-diff-port-129908                                                                                                                                                                                                                     │ default-k8s-diff-port-129908      │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ delete  │ -p test-preload-dl-github-941249                                                                                                                                                                                                                    │ test-preload-dl-github-941249     │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-832630 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd                                                                 │ test-preload-dl-gcs-cached-832630 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-832630                                                                                                                                                                                                                │ test-preload-dl-gcs-cached-832630 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ delete  │ -p embed-certs-982151                                                                                                                                                                                                                               │ embed-certs-982151                │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ delete  │ -p default-k8s-diff-port-129908                                                                                                                                                                                                                     │ default-k8s-diff-port-129908      │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ addons  │ enable metrics-server -p newest-cni-190777 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-190777                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ stop    │ -p newest-cni-190777 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-190777                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ delete  │ -p embed-certs-982151                                                                                                                                                                                                                               │ embed-certs-982151                │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ addons  │ enable dashboard -p newest-cni-190777 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-190777                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ start   │ -p newest-cni-190777 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0 │ newest-cni-190777                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ image   │ newest-cni-190777 image list --format=json                                                                                                                                                                                                          │ newest-cni-190777                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ pause   │ -p newest-cni-190777 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-190777                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ unpause │ -p newest-cni-190777 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-190777                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:05:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:05:10.393806  900004 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:05:10.393948  900004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:05:10.393961  900004 out.go:374] Setting ErrFile to fd 2...
	I1228 07:05:10.393967  900004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:05:10.394165  900004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 07:05:10.394646  900004 out.go:368] Setting JSON to false
	I1228 07:05:10.395667  900004 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13654,"bootTime":1766891856,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 07:05:10.395732  900004 start.go:143] virtualization: kvm guest
	I1228 07:05:10.397602  900004 out.go:179] * [newest-cni-190777] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 07:05:10.398667  900004 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:05:10.398694  900004 notify.go:221] Checking for updates...
	I1228 07:05:10.400682  900004 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:05:10.401741  900004 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:05:10.402739  900004 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	I1228 07:05:10.403892  900004 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 07:05:10.404947  900004 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:05:10.406605  900004 config.go:182] Loaded profile config "newest-cni-190777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:05:10.407410  900004 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:05:10.431516  900004 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 07:05:10.431601  900004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:05:10.487374  900004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-28 07:05:10.478009878 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 07:05:10.487473  900004 docker.go:319] overlay module found
	I1228 07:05:10.489731  900004 out.go:179] * Using the docker driver based on existing profile
	I1228 07:05:10.490919  900004 start.go:309] selected driver: docker
	I1228 07:05:10.490936  900004 start.go:928] validating driver "docker" against &{Name:newest-cni-190777 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:05:10.491043  900004 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:05:10.491571  900004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:05:10.545533  900004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-28 07:05:10.535980615 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 07:05:10.545878  900004 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1228 07:05:10.545926  900004 cni.go:84] Creating CNI manager for ""
	I1228 07:05:10.546013  900004 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:05:10.546068  900004 start.go:353] cluster config:
	{Name:newest-cni-190777 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:05:10.547681  900004 out.go:179] * Starting "newest-cni-190777" primary control-plane node in "newest-cni-190777" cluster
	I1228 07:05:10.548878  900004 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:05:10.549953  900004 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:05:10.550907  900004 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:05:10.550944  900004 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-552174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-amd64.tar.lz4
	I1228 07:05:10.550962  900004 cache.go:65] Caching tarball of preloaded images
	I1228 07:05:10.551021  900004 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:05:10.551064  900004 preload.go:251] Found /home/jenkins/minikube-integration/22352-552174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1228 07:05:10.551079  900004 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1228 07:05:10.551278  900004 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/config.json ...
	I1228 07:05:10.570416  900004 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:05:10.570433  900004 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:05:10.570462  900004 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:05:10.570512  900004 start.go:360] acquireMachinesLock for newest-cni-190777: {Name:mkb88475b99b1170872fe76b7e2a784e228c1e71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:05:10.570593  900004 start.go:364] duration metric: took 45.554µs to acquireMachinesLock for "newest-cni-190777"
	I1228 07:05:10.570614  900004 start.go:96] Skipping create...Using existing machine configuration
	I1228 07:05:10.570625  900004 fix.go:54] fixHost starting: 
	I1228 07:05:10.570858  900004 cli_runner.go:164] Run: docker container inspect newest-cni-190777 --format={{.State.Status}}
	I1228 07:05:10.588561  900004 fix.go:112] recreateIfNeeded on newest-cni-190777: state=Stopped err=<nil>
	W1228 07:05:10.588603  900004 fix.go:138] unexpected machine state, will restart: <nil>
	I1228 07:05:10.590184  900004 out.go:252] * Restarting existing docker container for "newest-cni-190777" ...
	I1228 07:05:10.590258  900004 cli_runner.go:164] Run: docker start newest-cni-190777
	I1228 07:05:10.826355  900004 cli_runner.go:164] Run: docker container inspect newest-cni-190777 --format={{.State.Status}}
	I1228 07:05:10.843692  900004 kic.go:430] container "newest-cni-190777" state is running.
	I1228 07:05:10.844134  900004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190777
	I1228 07:05:10.862082  900004 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/config.json ...
	I1228 07:05:10.862382  900004 machine.go:94] provisionDockerMachine start ...
	I1228 07:05:10.862463  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:10.880241  900004 main.go:144] libmachine: Using SSH client type: native
	I1228 07:05:10.880538  900004 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1228 07:05:10.880554  900004 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:05:10.881370  900004 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51356->127.0.0.1:33143: read: connection reset by peer
	I1228 07:05:14.005566  900004 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-190777
	
	I1228 07:05:14.005597  900004 ubuntu.go:182] provisioning hostname "newest-cni-190777"
	I1228 07:05:14.005674  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:14.023817  900004 main.go:144] libmachine: Using SSH client type: native
	I1228 07:05:14.024050  900004 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1228 07:05:14.024063  900004 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-190777 && echo "newest-cni-190777" | sudo tee /etc/hostname
	I1228 07:05:14.156389  900004 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-190777
	
	I1228 07:05:14.156486  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:14.173835  900004 main.go:144] libmachine: Using SSH client type: native
	I1228 07:05:14.174135  900004 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1228 07:05:14.174157  900004 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-190777' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-190777/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-190777' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:05:14.295865  900004 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 07:05:14.295893  900004 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-552174/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-552174/.minikube}
	I1228 07:05:14.295926  900004 ubuntu.go:190] setting up certificates
	I1228 07:05:14.295947  900004 provision.go:84] configureAuth start
	I1228 07:05:14.296023  900004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190777
	I1228 07:05:14.312985  900004 provision.go:143] copyHostCerts
	I1228 07:05:14.313064  900004 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem, removing ...
	I1228 07:05:14.313088  900004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem
	I1228 07:05:14.313176  900004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-552174/.minikube/ca.pem (1078 bytes)
	I1228 07:05:14.313329  900004 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem, removing ...
	I1228 07:05:14.313342  900004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem
	I1228 07:05:14.313378  900004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-552174/.minikube/cert.pem (1123 bytes)
	I1228 07:05:14.313460  900004 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem, removing ...
	I1228 07:05:14.313469  900004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem
	I1228 07:05:14.313497  900004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-552174/.minikube/key.pem (1675 bytes)
	I1228 07:05:14.313569  900004 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem org=jenkins.newest-cni-190777 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-190777]
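
The server certificate generated above must carry every entry from the san=[...] list, or TLS verification against 127.0.0.1 and 192.168.103.2 would fail. A quick check of the SANs actually baked into the file (a sketch, assuming OpenSSL 1.1.1+ for the -ext flag):

    # List the subjectAltName entries in the generated machine cert.
    openssl x509 -noout -ext subjectAltName \
      -in /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem
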
	I1228 07:05:14.447685  900004 provision.go:177] copyRemoteCerts
	I1228 07:05:14.447764  900004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:05:14.447811  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:14.465416  900004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/newest-cni-190777/id_rsa Username:docker}
	I1228 07:05:14.557120  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1228 07:05:14.575088  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1228 07:05:14.592738  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 07:05:14.609700  900004 provision.go:87] duration metric: took 313.718646ms to configureAuth
	I1228 07:05:14.609726  900004 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:05:14.609902  900004 config.go:182] Loaded profile config "newest-cni-190777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:05:14.609917  900004 machine.go:97] duration metric: took 3.747516141s to provisionDockerMachine
	I1228 07:05:14.609928  900004 start.go:293] postStartSetup for "newest-cni-190777" (driver="docker")
	I1228 07:05:14.609941  900004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:05:14.609996  900004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:05:14.610045  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:14.628822  900004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/newest-cni-190777/id_rsa Username:docker}
	I1228 07:05:14.719656  900004 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:05:14.723197  900004 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:05:14.723237  900004 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:05:14.723249  900004 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-552174/.minikube/addons for local assets ...
	I1228 07:05:14.723298  900004 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-552174/.minikube/files for local assets ...
	I1228 07:05:14.723372  900004 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem -> 5558782.pem in /etc/ssl/certs
	I1228 07:05:14.723465  900004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:05:14.730931  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem --> /etc/ssl/certs/5558782.pem (1708 bytes)
	I1228 07:05:14.747882  900004 start.go:296] duration metric: took 137.937725ms for postStartSetup
	I1228 07:05:14.747974  900004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:05:14.748019  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:14.765698  900004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/newest-cni-190777/id_rsa Username:docker}
	I1228 07:05:14.854348  900004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:05:14.858974  900004 fix.go:56] duration metric: took 4.288344457s for fixHost
	I1228 07:05:14.859005  900004 start.go:83] releasing machines lock for "newest-cni-190777", held for 4.288397064s
	I1228 07:05:14.859079  900004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190777
	I1228 07:05:14.876492  900004 ssh_runner.go:195] Run: cat /version.json
	I1228 07:05:14.876536  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:14.876566  900004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 07:05:14.876676  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:14.894678  900004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/newest-cni-190777/id_rsa Username:docker}
	I1228 07:05:14.894947  900004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/newest-cni-190777/id_rsa Username:docker}
	I1228 07:05:15.036051  900004 ssh_runner.go:195] Run: systemctl --version
	I1228 07:05:15.042856  900004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:05:15.047548  900004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:05:15.047644  900004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:05:15.055474  900004 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 07:05:15.055497  900004 start.go:496] detecting cgroup driver to use...
	I1228 07:05:15.055526  900004 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 07:05:15.055574  900004 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1228 07:05:15.070948  900004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:05:15.082930  900004 docker.go:218] disabling cri-docker service (if available) ...
	I1228 07:05:15.082973  900004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 07:05:15.096547  900004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 07:05:15.107966  900004 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 07:05:15.183858  900004 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 07:05:15.261886  900004 docker.go:234] disabling docker service ...
	I1228 07:05:15.261957  900004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 07:05:15.275857  900004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 07:05:15.287914  900004 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 07:05:15.365210  900004 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 07:05:15.444639  900004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 07:05:15.457558  900004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:05:15.472339  900004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1228 07:05:15.481165  900004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:05:15.489818  900004 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1228 07:05:15.489883  900004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1228 07:05:15.498431  900004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:05:15.506828  900004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:05:15.515271  900004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:05:15.523537  900004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:05:15.531367  900004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:05:15.539547  900004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:05:15.547913  900004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
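
The sed passes above rewrite /etc/containerd/config.toml in place. A spot-check of the keys they are expected to leave behind (per the expressions above: SystemdCgroup = true, sandbox_image = "registry.k8s.io/pause:3.10.1", restrict_oom_score_adj = false, conf_dir = "/etc/cni/net.d"):

    # Verify the rewritten containerd settings before the daemon restart.
    sudo grep -E 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir' \
      /etc/containerd/config.toml
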
	I1228 07:05:15.556265  900004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:05:15.563253  900004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:05:15.570251  900004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:05:15.645715  900004 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1228 07:05:15.747254  900004 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1228 07:05:15.747320  900004 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
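
"Will wait 60s for socket path" polls until containerd re-creates its socket after the restart. A rough shell equivalent of that wait on the node (a sketch, not minikube's actual Go implementation):

    # Poll up to 60s for the containerd socket, then fail loudly via stat.
    for i in $(seq 1 60); do
      [ -S /run/containerd/containerd.sock ] && break
      sleep 1
    done
    stat /run/containerd/containerd.sock
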
	I1228 07:05:15.751677  900004 start.go:574] Will wait 60s for crictl version
	I1228 07:05:15.751732  900004 ssh_runner.go:195] Run: which crictl
	I1228 07:05:15.755366  900004 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:05:15.780374  900004 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1228 07:05:15.780438  900004 ssh_runner.go:195] Run: containerd --version
	I1228 07:05:15.803257  900004 ssh_runner.go:195] Run: containerd --version
	I1228 07:05:15.827968  900004 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1228 07:05:15.829178  900004 cli_runner.go:164] Run: docker network inspect newest-cni-190777 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:05:15.846435  900004 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1228 07:05:15.850689  900004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:05:15.862468  900004 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1228 07:05:15.863440  900004 kubeadm.go:884] updating cluster {Name:newest-cni-190777 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:05:15.863578  900004 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:05:15.863641  900004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:05:15.890088  900004 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:05:15.890117  900004 containerd.go:542] Images already preloaded, skipping extraction
	I1228 07:05:15.890183  900004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:05:15.914187  900004 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:05:15.914212  900004 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:05:15.914233  900004 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0 containerd true true} ...
	I1228 07:05:15.914359  900004 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-190777 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
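
The drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. The empty ExecStart= line is deliberate: systemd requires the base unit's command to be cleared before a drop-in may redefine it. The unit systemd will actually run can be inspected on the node with:

    # Show the base kubelet unit merged with all drop-ins.
    systemctl cat kubelet
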
	I1228 07:05:15.914435  900004 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1228 07:05:15.940000  900004 cni.go:84] Creating CNI manager for ""
	I1228 07:05:15.940026  900004 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:05:15.940048  900004 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1228 07:05:15.940080  900004 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-190777 NodeName:newest-cni-190777 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:05:15.940205  900004 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-190777"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
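
The generated config is staged as /var/tmp/minikube/kubeadm.yaml.new before being diffed against the previous copy (see the sudo diff -u run below). On kubeadm releases that ship the config validate subcommand (v1.26+), the file can also be schema-checked offline; this sketch assumes kubeadm was staged into the same binaries directory as kubectl:

    # Validate the generated config against the kubeadm v1beta4 schema.
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
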
	I1228 07:05:15.940354  900004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:05:15.948654  900004 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:05:15.948707  900004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:05:15.956396  900004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1228 07:05:15.968457  900004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:05:15.980652  900004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1228 07:05:15.992746  900004 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:05:15.996237  900004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
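
The one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the pinned address. Resolution can be confirmed through NSS, which consults /etc/hosts first under the default nsswitch ordering:

    # Should print: 192.168.103.2  control-plane.minikube.internal
    getent hosts control-plane.minikube.internal
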
	I1228 07:05:16.006135  900004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:05:16.084895  900004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:05:16.115569  900004 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777 for IP: 192.168.103.2
	I1228 07:05:16.115603  900004 certs.go:195] generating shared ca certs ...
	I1228 07:05:16.115629  900004 certs.go:227] acquiring lock for ca certs: {Name:mkf3a34076ce55c96c0ca7e803bd863f5c48e3ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:05:16.115803  900004 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key
	I1228 07:05:16.115886  900004 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key
	I1228 07:05:16.115901  900004 certs.go:257] generating profile certs ...
	I1228 07:05:16.116024  900004 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/client.key
	I1228 07:05:16.116102  900004 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key.74b88bb9
	I1228 07:05:16.116159  900004 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.key
	I1228 07:05:16.116331  900004 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem (1338 bytes)
	W1228 07:05:16.116380  900004 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878_empty.pem, impossibly tiny 0 bytes
	I1228 07:05:16.116390  900004 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 07:05:16.116426  900004 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/ca.pem (1078 bytes)
	I1228 07:05:16.116459  900004 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:05:16.116493  900004 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/certs/key.pem (1675 bytes)
	I1228 07:05:16.116552  900004 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem (1708 bytes)
	I1228 07:05:16.117585  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:05:16.137146  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:05:16.156405  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:05:16.175078  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:05:16.197399  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1228 07:05:16.219050  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1228 07:05:16.236681  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:05:16.253553  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/newest-cni-190777/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 07:05:16.270285  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/certs/555878.pem --> /usr/share/ca-certificates/555878.pem (1338 bytes)
	I1228 07:05:16.287095  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/ssl/certs/5558782.pem --> /usr/share/ca-certificates/5558782.pem (1708 bytes)
	I1228 07:05:16.306160  900004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-552174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:05:16.323252  900004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:05:16.335548  900004 ssh_runner.go:195] Run: openssl version
	I1228 07:05:16.342164  900004 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/555878.pem
	I1228 07:05:16.349195  900004 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/555878.pem /etc/ssl/certs/555878.pem
	I1228 07:05:16.356663  900004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/555878.pem
	I1228 07:05:16.360426  900004 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/555878.pem
	I1228 07:05:16.360481  900004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/555878.pem
	I1228 07:05:16.394407  900004 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:05:16.402398  900004 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5558782.pem
	I1228 07:05:16.409706  900004 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5558782.pem /etc/ssl/certs/5558782.pem
	I1228 07:05:16.417048  900004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5558782.pem
	I1228 07:05:16.420805  900004 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/5558782.pem
	I1228 07:05:16.420859  900004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5558782.pem
	I1228 07:05:16.455924  900004 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:05:16.464153  900004 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:05:16.471456  900004 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:05:16.478780  900004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:05:16.482612  900004 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:29 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:05:16.482669  900004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:05:16.515790  900004 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
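
The <hash>.0 names checked above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash filenames: OpenSSL locates CA certificates in /etc/ssl/certs by hashing the subject name. The symlink cycle in the log can be reproduced for any certificate like so:

    # Derive the subject hash and create the lookup symlink OpenSSL expects.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"
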
	I1228 07:05:16.523108  900004 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:05:16.526664  900004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 07:05:16.561497  900004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 07:05:16.597107  900004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 07:05:16.634177  900004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 07:05:16.675473  900004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 07:05:16.727906  900004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
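
Each -checkend 86400 run above asks whether the certificate expires within the next 86400 seconds (24 hours); OpenSSL exits 0 only if the cert outlives that window, which is how minikube decides no regeneration is needed. For example:

    # Non-zero exit (the || branch) means the cert expires within 24h.
    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/etcd/server.crt \
      && echo "valid for at least 24h" || echo "expiring within 24h"
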
	I1228 07:05:16.774799  900004 kubeadm.go:401] StartCluster: {Name:newest-cni-190777 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-190777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:05:16.774959  900004 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1228 07:05:16.794660  900004 cri.go:83] list returned 5 containers
	I1228 07:05:16.794735  900004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:05:16.805252  900004 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 07:05:16.805275  900004 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 07:05:16.805330  900004 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 07:05:16.815987  900004 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 07:05:16.816640  900004 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-190777" does not appear in /home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:05:16.816821  900004 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-552174/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-190777" cluster setting kubeconfig missing "newest-cni-190777" context setting]
	I1228 07:05:16.817241  900004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/kubeconfig: {Name:mk8eb66c78260a0013ac235827a08f86055faf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:05:16.818869  900004 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 07:05:16.830867  900004 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1228 07:05:16.830909  900004 kubeadm.go:602] duration metric: took 25.625265ms to restartPrimaryControlPlane
	I1228 07:05:16.830919  900004 kubeadm.go:403] duration metric: took 56.134163ms to StartCluster
	I1228 07:05:16.830935  900004 settings.go:142] acquiring lock: {Name:mk0a3f928fed4bf13f8897ea15768d1c7b315118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:05:16.831013  900004 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 07:05:16.832276  900004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/kubeconfig: {Name:mk8eb66c78260a0013ac235827a08f86055faf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:05:16.832530  900004 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1228 07:05:16.832712  900004 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 07:05:16.832822  900004 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-190777"
	I1228 07:05:16.832851  900004 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-190777"
	W1228 07:05:16.832865  900004 addons.go:248] addon storage-provisioner should already be in state true
	I1228 07:05:16.832860  900004 addons.go:70] Setting default-storageclass=true in profile "newest-cni-190777"
	I1228 07:05:16.832896  900004 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-190777"
	I1228 07:05:16.832900  900004 config.go:182] Loaded profile config "newest-cni-190777": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:05:16.832910  900004 addons.go:70] Setting metrics-server=true in profile "newest-cni-190777"
	I1228 07:05:16.832928  900004 addons.go:239] Setting addon metrics-server=true in "newest-cni-190777"
	W1228 07:05:16.832935  900004 addons.go:248] addon metrics-server should already be in state true
	I1228 07:05:16.832899  900004 host.go:66] Checking if "newest-cni-190777" exists ...
	I1228 07:05:16.832952  900004 host.go:66] Checking if "newest-cni-190777" exists ...
	I1228 07:05:16.833290  900004 cli_runner.go:164] Run: docker container inspect newest-cni-190777 --format={{.State.Status}}
	I1228 07:05:16.833443  900004 cli_runner.go:164] Run: docker container inspect newest-cni-190777 --format={{.State.Status}}
	I1228 07:05:16.833502  900004 cli_runner.go:164] Run: docker container inspect newest-cni-190777 --format={{.State.Status}}
	I1228 07:05:16.833556  900004 addons.go:70] Setting dashboard=true in profile "newest-cni-190777"
	I1228 07:05:16.833581  900004 addons.go:239] Setting addon dashboard=true in "newest-cni-190777"
	W1228 07:05:16.833590  900004 addons.go:248] addon dashboard should already be in state true
	I1228 07:05:16.833647  900004 host.go:66] Checking if "newest-cni-190777" exists ...
	I1228 07:05:16.834090  900004 cli_runner.go:164] Run: docker container inspect newest-cni-190777 --format={{.State.Status}}
	I1228 07:05:16.836339  900004 out.go:179] * Verifying Kubernetes components...
	I1228 07:05:16.838515  900004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:05:16.860964  900004 addons.go:239] Setting addon default-storageclass=true in "newest-cni-190777"
	W1228 07:05:16.860993  900004 addons.go:248] addon default-storageclass should already be in state true
	I1228 07:05:16.861110  900004 host.go:66] Checking if "newest-cni-190777" exists ...
	I1228 07:05:16.861843  900004 cli_runner.go:164] Run: docker container inspect newest-cni-190777 --format={{.State.Status}}
	I1228 07:05:16.869633  900004 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 07:05:16.870489  900004 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1228 07:05:16.871027  900004 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:05:16.871055  900004 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 07:05:16.871140  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:16.871854  900004 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1228 07:05:16.871879  900004 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1228 07:05:16.871943  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:16.872103  900004 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 07:05:16.873809  900004 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1228 07:05:16.874851  900004 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 07:05:16.874871  900004 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 07:05:16.874934  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:16.904859  900004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/newest-cni-190777/id_rsa Username:docker}
	I1228 07:05:16.905475  900004 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 07:05:16.905503  900004 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 07:05:16.905566  900004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190777
	I1228 07:05:16.905981  900004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/newest-cni-190777/id_rsa Username:docker}
	I1228 07:05:16.909327  900004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/newest-cni-190777/id_rsa Username:docker}
	I1228 07:05:16.936278  900004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/newest-cni-190777/id_rsa Username:docker}
	I1228 07:05:17.007706  900004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:05:17.024849  900004 api_server.go:52] waiting for apiserver process to appear ...
	I1228 07:05:17.024932  900004 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 07:05:17.027968  900004 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 07:05:17.027988  900004 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 07:05:17.029114  900004 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1228 07:05:17.029131  900004 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1228 07:05:17.033257  900004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:05:17.045286  900004 api_server.go:72] duration metric: took 212.718533ms to wait for apiserver process to appear ...
	I1228 07:05:17.045321  900004 api_server.go:88] waiting for apiserver healthz status ...
	I1228 07:05:17.045353  900004 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1228 07:05:17.047627  900004 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 07:05:17.047649  900004 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 07:05:17.049866  900004 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1228 07:05:17.049890  900004 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1228 07:05:17.053971  900004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:05:17.066659  900004 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 07:05:17.066683  900004 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 07:05:17.072603  900004 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:05:17.073069  900004 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1228 07:05:17.084118  900004 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 07:05:17.084135  900004 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 07:05:17.093327  900004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:05:17.102101  900004 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 07:05:17.102123  900004 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 07:05:17.122741  900004 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 07:05:17.122764  900004 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 07:05:17.137698  900004 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 07:05:17.137721  900004 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 07:05:17.152247  900004 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 07:05:17.152267  900004 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 07:05:17.165972  900004 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:05:17.165994  900004 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 07:05:17.178845  900004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
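
All ten dashboard manifests are applied in a single kubectl apply with repeated -f flags. An equivalent shorter form would point -f at the directory instead; this sketch assumes nothing besides the intended manifests sits in /etc/kubernetes/addons:

    # Apply every staged addon manifest in one invocation.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/
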
	I1228 07:05:18.225381  900004 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1228 07:05:18.225409  900004 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1228 07:05:18.225427  900004 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1228 07:05:18.234327  900004 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1228 07:05:18.234432  900004 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1228 07:05:18.546065  900004 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1228 07:05:18.551407  900004 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 07:05:18.551435  900004 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 07:05:18.774476  900004 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.741180341s)
	I1228 07:05:18.774520  900004 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.720526625s)
	I1228 07:05:18.790482  900004 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.69711601s)
	I1228 07:05:18.790524  900004 addons.go:495] Verifying addon metrics-server=true in "newest-cni-190777"
	I1228 07:05:18.790593  900004 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.611708535s)
	I1228 07:05:18.792211  900004 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-190777 addons enable metrics-server
	
	I1228 07:05:18.793426  900004 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1228 07:05:18.794524  900004 addons.go:530] duration metric: took 1.961820285s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1228 07:05:19.045848  900004 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1228 07:05:19.050039  900004 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 07:05:19.050068  900004 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 07:05:19.546460  900004 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1228 07:05:19.551281  900004 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1228 07:05:19.552473  900004 api_server.go:141] control plane version: v1.35.0
	I1228 07:05:19.552502  900004 api_server.go:131] duration metric: took 2.507172453s to wait for apiserver health ...
	I1228 07:05:19.552513  900004 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 07:05:19.556813  900004 system_pods.go:59] 9 kube-system pods found
	I1228 07:05:19.556861  900004 system_pods.go:61] "coredns-7d764666f9-4jmjw" [3e1ad652-5902-4ab5-a6f3-d7a1b31f4bbe] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1228 07:05:19.556882  900004 system_pods.go:61] "etcd-newest-cni-190777" [27a1a0b7-aa37-43ea-b386-98ecd563f757] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:05:19.556906  900004 system_pods.go:61] "kindnet-zzjsv" [f2a105ad-3a98-4a28-9085-808241a62768] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:05:19.556926  900004 system_pods.go:61] "kube-apiserver-newest-cni-190777" [c623afdc-762c-422b-bc01-6835a086e78e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:05:19.556940  900004 system_pods.go:61] "kube-controller-manager-newest-cni-190777" [9716ceb3-ce70-4935-9688-8bbaa305f037] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:05:19.556953  900004 system_pods.go:61] "kube-proxy-jtmkx" [ddc1bc5a-c4b2-4e33-a540-aca0afd3fae6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:05:19.556962  900004 system_pods.go:61] "kube-scheduler-newest-cni-190777" [13d890be-e9c3-4161-9fad-7b97d2d8ba26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:05:19.556970  900004 system_pods.go:61] "metrics-server-5d785b57d4-89lc5" [29e0f7b1-e316-43e9-bb28-3427c18190a2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1228 07:05:19.556985  900004 system_pods.go:61] "storage-provisioner" [8af786ab-27b8-44bc-80de-85dcff282c60] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1228 07:05:19.556993  900004 system_pods.go:74] duration metric: took 4.474585ms to wait for pod list to return data ...
	I1228 07:05:19.557003  900004 default_sa.go:34] waiting for default service account to be created ...
	I1228 07:05:19.559796  900004 default_sa.go:45] found service account: "default"
	I1228 07:05:19.559816  900004 default_sa.go:55] duration metric: took 2.806415ms for default service account to be created ...
	I1228 07:05:19.559827  900004 kubeadm.go:587] duration metric: took 2.727266199s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1228 07:05:19.559846  900004 node_conditions.go:102] verifying NodePressure condition ...
	I1228 07:05:19.562689  900004 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 07:05:19.562708  900004 node_conditions.go:123] node cpu capacity is 8
	I1228 07:05:19.562722  900004 node_conditions.go:105] duration metric: took 2.871443ms to run NodePressure ...
	I1228 07:05:19.562733  900004 start.go:242] waiting for startup goroutines ...
	I1228 07:05:19.562740  900004 start.go:247] waiting for cluster config update ...
	I1228 07:05:19.562751  900004 start.go:256] writing updated cluster config ...
	I1228 07:05:19.563004  900004 ssh_runner.go:195] Run: rm -f paused
	I1228 07:05:19.614643  900004 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 07:05:19.616905  900004 out.go:179] * Done! kubectl is now configured to use "newest-cni-190777" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	83c9f5179f673       32652ff1bbe6b       5 seconds ago       Running             kube-proxy                1                   f644bdb353597       kube-proxy-jtmkx                            kube-system
	6dc1fafeed576       2c9a4b058bd7e       7 seconds ago       Running             kube-controller-manager   1                   7ffd34da11262       kube-controller-manager-newest-cni-190777   kube-system
	6c11d536061f6       550794e3b12ac       7 seconds ago       Running             kube-scheduler            1                   7a619959cadd7       kube-scheduler-newest-cni-190777            kube-system
	0d35837bfc00c       5c6acd67e9cd1       7 seconds ago       Running             kube-apiserver            1                   e008cbe31b34c       kube-apiserver-newest-cni-190777            kube-system
	e262a74ab75ab       0a108f7189562       7 seconds ago       Running             etcd                      1                   668433a17b14d       etcd-newest-cni-190777                      kube-system
	0888a08e04899       32652ff1bbe6b       16 seconds ago      Exited              kube-proxy                0                   6d945055cbfba       kube-proxy-jtmkx                            kube-system
	bf133646ee864       550794e3b12ac       26 seconds ago      Exited              kube-scheduler            0                   10e6f2846895c       kube-scheduler-newest-cni-190777            kube-system
	a08faf32774b6       2c9a4b058bd7e       26 seconds ago      Exited              kube-controller-manager   0                   3925c7a01f4b2       kube-controller-manager-newest-cni-190777   kube-system
	02610c2e73d55       0a108f7189562       26 seconds ago      Exited              etcd                      0                   b6cbf6ad192a5       etcd-newest-cni-190777                      kube-system
	1780088a12194       5c6acd67e9cd1       26 seconds ago      Exited              kube-apiserver            0                   779df21e00a15       kube-apiserver-newest-cni-190777            kube-system
	
	
	==> containerd <==
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.505633575Z" level=info msg="StopPodSandbox for \"6d945055cbfba465592d2a9dd9bfa4f1160b84c45ac14af2ad8bf4481d078ce3\" returns successfully"
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.508460743Z" level=info msg="RunPodSandbox for name:\"kube-proxy-jtmkx\"  uid:\"ddc1bc5a-c4b2-4e33-a540-aca0afd3fae6\"  namespace:\"kube-system\"  attempt:1"
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.509960590Z" level=info msg="connecting to shim 6b963351520a6acb0def08068674098b08393371dfd60bf0f0855c11539c8412" address="unix:///run/containerd/s/8c2e40632c6cd3bc305ea2cc51992cb1707a8d1d198d510e7bfedfa0df479661" namespace=k8s.io protocol=ttrpc version=3
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.524873955Z" level=info msg="connecting to shim f644bdb3535974d48306bee1e05e3b8816d46ca8e947df02b23b584fab2521b5" address="unix:///run/containerd/s/302ed862e7a7f4f84287374b5f7b7ad43a021f2a35f41dcab7b06853a5a846bb" namespace=k8s.io protocol=ttrpc version=3
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.566536806Z" level=info msg="RunPodSandbox for name:\"kube-proxy-jtmkx\"  uid:\"ddc1bc5a-c4b2-4e33-a540-aca0afd3fae6\"  namespace:\"kube-system\"  attempt:1 returns sandbox id \"f644bdb3535974d48306bee1e05e3b8816d46ca8e947df02b23b584fab2521b5\""
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.569777977Z" level=info msg="CreateContainer within sandbox \"f644bdb3535974d48306bee1e05e3b8816d46ca8e947df02b23b584fab2521b5\" for container name:\"kube-proxy\"  attempt:1"
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.577792742Z" level=info msg="Container 83c9f5179f673e061c5bee98854480bc9024254941d403fd1f5a703a22b26178: CDI devices from CRI Config.CDIDevices: []"
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.586045900Z" level=info msg="CreateContainer within sandbox \"f644bdb3535974d48306bee1e05e3b8816d46ca8e947df02b23b584fab2521b5\" for name:\"kube-proxy\"  attempt:1 returns container id \"83c9f5179f673e061c5bee98854480bc9024254941d403fd1f5a703a22b26178\""
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.586498613Z" level=info msg="StartContainer for \"83c9f5179f673e061c5bee98854480bc9024254941d403fd1f5a703a22b26178\""
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.587959824Z" level=info msg="connecting to shim 83c9f5179f673e061c5bee98854480bc9024254941d403fd1f5a703a22b26178" address="unix:///run/containerd/s/302ed862e7a7f4f84287374b5f7b7ad43a021f2a35f41dcab7b06853a5a846bb" protocol=ttrpc version=3
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.664782626Z" level=info msg="StartContainer for \"83c9f5179f673e061c5bee98854480bc9024254941d403fd1f5a703a22b26178\" returns successfully"
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.764922454Z" level=info msg="RunPodSandbox for name:\"kindnet-zzjsv\"  uid:\"f2a105ad-3a98-4a28-9085-808241a62768\"  namespace:\"kube-system\"  attempt:1 returns sandbox id \"6b963351520a6acb0def08068674098b08393371dfd60bf0f0855c11539c8412\""
	Dec 28 07:05:19 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:19.767108366Z" level=info msg="PullImage \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Dec 28 07:05:20 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:20.613425988Z" level=info msg="stop pulling image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88: active requests=1, bytes read=5415"
	Dec 28 07:05:20 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:20.617413317Z" level=error msg="PullImage \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\" failed" error="rpc error: code = Canceled desc = failed to pull and unpack image \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae\": context canceled"
	Dec 28 07:05:21 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:21.790645060Z" level=info msg="StopPodSandbox for \"dbb6d724979296d0f4a5fe3f6912a6cf513c1efa48be5bf92fdce2ddb029fc87\""
	Dec 28 07:05:21 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:21.793898409Z" level=info msg="TearDown network for sandbox \"dbb6d724979296d0f4a5fe3f6912a6cf513c1efa48be5bf92fdce2ddb029fc87\" successfully"
	Dec 28 07:05:21 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:21.794046081Z" level=info msg="StopPodSandbox for \"dbb6d724979296d0f4a5fe3f6912a6cf513c1efa48be5bf92fdce2ddb029fc87\" returns successfully"
	Dec 28 07:05:21 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:21.796930832Z" level=info msg="RemovePodSandbox for \"dbb6d724979296d0f4a5fe3f6912a6cf513c1efa48be5bf92fdce2ddb029fc87\""
	Dec 28 07:05:21 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:21.796982837Z" level=info msg="Forcibly stopping sandbox \"dbb6d724979296d0f4a5fe3f6912a6cf513c1efa48be5bf92fdce2ddb029fc87\""
	Dec 28 07:05:21 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:21.797497688Z" level=info msg="TearDown network for sandbox \"dbb6d724979296d0f4a5fe3f6912a6cf513c1efa48be5bf92fdce2ddb029fc87\" successfully"
	Dec 28 07:05:21 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:21.800574979Z" level=info msg="Ensure that sandbox dbb6d724979296d0f4a5fe3f6912a6cf513c1efa48be5bf92fdce2ddb029fc87 in task-service has been cleanup successfully"
	Dec 28 07:05:21 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:21.804600429Z" level=info msg="RemovePodSandbox \"dbb6d724979296d0f4a5fe3f6912a6cf513c1efa48be5bf92fdce2ddb029fc87\" returns successfully"
	Dec 28 07:05:21 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:21.978105488Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Dec 28 07:05:23 newest-cni-190777 containerd[453]: time="2025-12-28T07:05:23.086007307Z" level=info msg="PullImage \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	
	
	==> describe nodes <==
	Name:               newest-cni-190777
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-190777
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=newest-cni-190777
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T07_05_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 07:04:59 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-190777
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 07:05:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 07:05:21 +0000   Sun, 28 Dec 2025 07:04:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 07:05:21 +0000   Sun, 28 Dec 2025 07:04:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 07:05:21 +0000   Sun, 28 Dec 2025 07:04:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 28 Dec 2025 07:05:21 +0000   Sun, 28 Dec 2025 07:04:58 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-190777
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                14c0e987-b2e2-44d1-9c76-f5d4ac4e5339
	  Boot ID:                    b0f6328b-901c-4d58-bf8e-80c711dcb897
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-190777                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         22s
	  kube-system                 kindnet-zzjsv                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      17s
	  kube-system                 kube-apiserver-newest-cni-190777             250m (3%)     0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-controller-manager-newest-cni-190777    200m (2%)     0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-proxy-jtmkx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-scheduler-newest-cni-190777             100m (1%)     0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  18s   node-controller  Node newest-cni-190777 event: Registered Node newest-cni-190777 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-190777 event: Registered Node newest-cni-190777 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 27 65 9e 6b ac 08 06
	[  +0.000464] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 2e 94 6c 92 e4 08 06
	[Dec28 07:01] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff a6 29 4f 66 86 ca 08 06
	[  +0.004985] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 91 2b a7 d5 cf 08 06
	[ +18.742603] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 b5 93 ca e4 71 08 06
	[  +0.109309] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 c6 37 a9 4f 21 08 06
	[  +8.643062] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 52 6e 36 23 da 08 06
	[ +19.438211] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 70 fa ed e0 93 08 06
	[  +0.000530] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa 52 6e 36 23 da 08 06
	[  +0.857565] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 1d c3 e6 9a 7e 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 29 4f 66 86 ca 08 06
	[Dec28 07:02] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 a0 55 33 45 d1 08 06
	[  +0.000392] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 c6 37 a9 4f 21 08 06
	
	
	==> kernel <==
	 07:05:24 up  3:47,  0 user,  load average: 3.20, 3.08, 10.37
	Linux newest-cni-190777 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:05:21 newest-cni-190777 kubelet[1629]: I1228 07:05:21.996416    1629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f33664d46d270a46d04d25a233fe0f6-usr-local-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-190777\" (UID: \"2f33664d46d270a46d04d25a233fe0f6\") " pod="kube-system/kube-controller-manager-newest-cni-190777"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.777510    1629 apiserver.go:52] "Watching apiserver"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.791295    1629 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.801285    1629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f2a105ad-3a98-4a28-9085-808241a62768-cni-cfg\") pod \"kindnet-zzjsv\" (UID: \"f2a105ad-3a98-4a28-9085-808241a62768\") " pod="kube-system/kindnet-zzjsv"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.801331    1629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2a105ad-3a98-4a28-9085-808241a62768-xtables-lock\") pod \"kindnet-zzjsv\" (UID: \"f2a105ad-3a98-4a28-9085-808241a62768\") " pod="kube-system/kindnet-zzjsv"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.801411    1629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2a105ad-3a98-4a28-9085-808241a62768-lib-modules\") pod \"kindnet-zzjsv\" (UID: \"f2a105ad-3a98-4a28-9085-808241a62768\") " pod="kube-system/kindnet-zzjsv"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.801434    1629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddc1bc5a-c4b2-4e33-a540-aca0afd3fae6-lib-modules\") pod \"kube-proxy-jtmkx\" (UID: \"ddc1bc5a-c4b2-4e33-a540-aca0afd3fae6\") " pod="kube-system/kube-proxy-jtmkx"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.801466    1629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddc1bc5a-c4b2-4e33-a540-aca0afd3fae6-xtables-lock\") pod \"kube-proxy-jtmkx\" (UID: \"ddc1bc5a-c4b2-4e33-a540-aca0afd3fae6\") " pod="kube-system/kube-proxy-jtmkx"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.845992    1629 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-190777"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.846126    1629 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-190777"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.846011    1629 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-190777"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: I1228 07:05:22.846234    1629 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-190777"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: E1228 07:05:22.852210    1629 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-190777\" already exists" pod="kube-system/kube-scheduler-newest-cni-190777"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: E1228 07:05:22.852337    1629 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-190777" containerName="kube-scheduler"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: E1228 07:05:22.853536    1629 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-190777\" already exists" pod="kube-system/etcd-newest-cni-190777"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: E1228 07:05:22.853620    1629 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-190777" containerName="etcd"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: E1228 07:05:22.853701    1629 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-190777\" already exists" pod="kube-system/kube-apiserver-newest-cni-190777"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: E1228 07:05:22.853837    1629 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-190777" containerName="kube-apiserver"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: E1228 07:05:22.854026    1629 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-190777\" already exists" pod="kube-system/kube-controller-manager-newest-cni-190777"
	Dec 28 07:05:22 newest-cni-190777 kubelet[1629]: E1228 07:05:22.854102    1629 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-190777" containerName="kube-controller-manager"
	Dec 28 07:05:23 newest-cni-190777 kubelet[1629]: E1228 07:05:23.848168    1629 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-190777" containerName="kube-scheduler"
	Dec 28 07:05:23 newest-cni-190777 kubelet[1629]: E1228 07:05:23.848522    1629 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-190777" containerName="etcd"
	Dec 28 07:05:23 newest-cni-190777 kubelet[1629]: E1228 07:05:23.848713    1629 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-190777" containerName="kube-apiserver"
	Dec 28 07:05:23 newest-cni-190777 kubelet[1629]: E1228 07:05:23.848844    1629 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-190777" containerName="kube-controller-manager"
	Dec 28 07:05:24 newest-cni-190777 kubelet[1629]: E1228 07:05:24.850540    1629 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-190777" containerName="etcd"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-190777 -n newest-cni-190777
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-190777 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-4jmjw kindnet-zzjsv metrics-server-5d785b57d4-89lc5 storage-provisioner dashboard-metrics-scraper-867fb5f87b-ngt4g kubernetes-dashboard-b84665fb8-ffwqz
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-190777 describe pod coredns-7d764666f9-4jmjw kindnet-zzjsv metrics-server-5d785b57d4-89lc5 storage-provisioner dashboard-metrics-scraper-867fb5f87b-ngt4g kubernetes-dashboard-b84665fb8-ffwqz
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-190777 describe pod coredns-7d764666f9-4jmjw kindnet-zzjsv metrics-server-5d785b57d4-89lc5 storage-provisioner dashboard-metrics-scraper-867fb5f87b-ngt4g kubernetes-dashboard-b84665fb8-ffwqz: exit status 1 (81.011349ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-4jmjw" not found
	Error from server (NotFound): pods "kindnet-zzjsv" not found
	Error from server (NotFound): pods "metrics-server-5d785b57d4-89lc5" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-ngt4g" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-ffwqz" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-190777 describe pod coredns-7d764666f9-4jmjw kindnet-zzjsv metrics-server-5d785b57d4-89lc5 storage-provisioner dashboard-metrics-scraper-867fb5f87b-ngt4g kubernetes-dashboard-b84665fb8-ffwqz: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.23s)
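
Triage note: in the post-mortem above, the NotFound errors from `kubectl describe pod` do not mean the pods vanished — the helper lists pods across all namespaces (`-A`) but then describes them without a namespace flag, so pods living in kube-system and kubernetes-dashboard are looked up in the default namespace and never found. A minimal manual replay of the same checks, reusing the profile name from this run and adding the missing `-n` flag (pod name illustrative, taken from the listing above):

	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-190777 -n newest-cni-190777
	kubectl --context newest-cni-190777 get po -A --field-selector=status.phase!=Running
	kubectl --context newest-cni-190777 -n kube-system describe pod kindnet-zzjsv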

                                                
                                    

Test pass (301/333)

Order  Test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 13.76
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.35.0/json-events 10.81
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.08
18 TestDownloadOnly/v1.35.0/DeleteAll 0.23
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.42
21 TestBinaryMirror 0.83
22 TestOffline 47.86
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 127.56
29 TestAddons/serial/Volcano 38.95
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 9.46
35 TestAddons/parallel/Registry 14.93
36 TestAddons/parallel/RegistryCreds 0.66
37 TestAddons/parallel/Ingress 19.13
38 TestAddons/parallel/InspektorGadget 10.65
39 TestAddons/parallel/MetricsServer 5.87
41 TestAddons/parallel/CSI 50.26
42 TestAddons/parallel/Headlamp 17.39
43 TestAddons/parallel/CloudSpanner 5.46
44 TestAddons/parallel/LocalPath 10.12
45 TestAddons/parallel/NvidiaDevicePlugin 5.45
46 TestAddons/parallel/Yakd 10.68
47 TestAddons/parallel/AmdGpuDevicePlugin 5.49
48 TestAddons/StoppedEnableDisable 12.58
49 TestCertOptions 27.32
50 TestCertExpiration 217.63
52 TestForceSystemdFlag 32.66
53 TestForceSystemdEnv 28.38
54 TestDockerEnvContainerd 32.72
58 TestErrorSpam/setup 19.17
59 TestErrorSpam/start 0.66
60 TestErrorSpam/status 0.96
61 TestErrorSpam/pause 1.21
62 TestErrorSpam/unpause 1.22
63 TestErrorSpam/stop 2.01
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 38.94
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.76
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.46
75 TestFunctional/serial/CacheCmd/cache/add_local 2.02
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.49
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 26.78
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 0.67
86 TestFunctional/serial/LogsFileCmd 0.68
87 TestFunctional/serial/InvalidService 3.62
89 TestFunctional/parallel/ConfigCmd 0.49
90 TestFunctional/parallel/DashboardCmd 31.32
91 TestFunctional/parallel/DryRun 0.41
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 1.04
97 TestFunctional/parallel/ServiceCmdConnect 12.54
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 44.46
101 TestFunctional/parallel/SSHCmd 0.66
102 TestFunctional/parallel/CpCmd 1.66
103 TestFunctional/parallel/MySQL 26.41
104 TestFunctional/parallel/FileSync 0.29
105 TestFunctional/parallel/CertSync 1.7
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
113 TestFunctional/parallel/License 0.54
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.5
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 26.26
119 TestFunctional/parallel/MountCmd/any-port 25.82
120 TestFunctional/parallel/MountCmd/specific-port 1.9
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
127 TestFunctional/parallel/MountCmd/VerifyCleanup 1.88
128 TestFunctional/parallel/ServiceCmd/DeployApp 9.15
129 TestFunctional/parallel/Version/short 0.06
130 TestFunctional/parallel/Version/components 0.52
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
132 TestFunctional/parallel/ProfileCmd/profile_list 0.41
133 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
134 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
135 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
136 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
137 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
138 TestFunctional/parallel/ImageCommands/ImageBuild 4.34
139 TestFunctional/parallel/ImageCommands/Setup 1.05
140 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.55
141 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.02
142 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.54
143 TestFunctional/parallel/ServiceCmd/List 1.34
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
147 TestFunctional/parallel/ServiceCmd/JSONOutput 1.35
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.61
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.58
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
153 TestFunctional/parallel/ServiceCmd/Format 0.56
154 TestFunctional/parallel/ServiceCmd/URL 0.57
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.01
162 TestMultiControlPlane/serial/StartCluster 107.77
163 TestMultiControlPlane/serial/DeployApp 6.11
164 TestMultiControlPlane/serial/PingHostFromPods 1.2
165 TestMultiControlPlane/serial/AddWorkerNode 28.71
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
168 TestMultiControlPlane/serial/CopyFile 16.96
169 TestMultiControlPlane/serial/StopSecondaryNode 12.67
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.74
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.9
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 99.62
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9.37
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
176 TestMultiControlPlane/serial/StopCluster 36.08
177 TestMultiControlPlane/serial/RestartCluster 55.56
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
179 TestMultiControlPlane/serial/AddSecondaryNode 39.3
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
185 TestJSONOutput/start/Command 39.07
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.45
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.43
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.86
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 31.97
211 TestKicCustomNetwork/use_default_bridge_network 22.27
212 TestKicExistingNetwork 22.9
213 TestKicCustomSubnet 20.7
214 TestKicStaticIP 23.17
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 42.32
219 TestMountStart/serial/StartWithMountFirst 4.43
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 4.41
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.65
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.26
226 TestMountStart/serial/RestartStopped 7.6
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 68.9
231 TestMultiNode/serial/DeployApp2Nodes 5.23
232 TestMultiNode/serial/PingHostFrom2Pods 0.82
233 TestMultiNode/serial/AddNode 28.23
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.65
236 TestMultiNode/serial/CopyFile 9.59
237 TestMultiNode/serial/StopNode 2.25
238 TestMultiNode/serial/StartAfterStop 6.8
239 TestMultiNode/serial/RestartKeepsNodes 70.72
240 TestMultiNode/serial/DeleteNode 5.29
241 TestMultiNode/serial/StopMultiNode 23.89
242 TestMultiNode/serial/RestartMultiNode 52.69
243 TestMultiNode/serial/ValidateNameConflict 20.31
250 TestScheduledStopUnix 91.96
253 TestInsufficientStorage 11.43
254 TestRunningBinaryUpgrade 70.01
256 TestKubernetesUpgrade 312.97
257 TestMissingContainerUpgrade 80.48
265 TestNetworkPlugins/group/false 5.39
269 TestPreload/Start-NoPreload-PullImage 62.72
277 TestPreload/Restart-With-Preload-Check-User-Image 52.2
279 TestPause/serial/Start 38.99
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
283 TestNoKubernetes/serial/StartWithK8s 21.33
284 TestPause/serial/SecondStartNoReconfiguration 5.77
285 TestNoKubernetes/serial/StartWithStopK8s 5.01
286 TestPause/serial/Pause 0.5
288 TestNoKubernetes/serial/Start 3.76
289 TestStoppedBinaryUpgrade/Setup 3.75
290 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
291 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
292 TestNoKubernetes/serial/ProfileList 1.4
293 TestNoKubernetes/serial/Stop 1.29
294 TestStoppedBinaryUpgrade/Upgrade 285.99
295 TestNoKubernetes/serial/StartNoArgs 6.51
296 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
297 TestNetworkPlugins/group/auto/Start 43.74
298 TestNetworkPlugins/group/kindnet/Start 41.5
299 TestNetworkPlugins/group/auto/KubeletFlags 0.29
300 TestNetworkPlugins/group/auto/NetCatPod 8.2
301 TestNetworkPlugins/group/auto/DNS 0.12
302 TestNetworkPlugins/group/auto/Localhost 0.12
303 TestNetworkPlugins/group/auto/HairPin 0.11
304 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
305 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
306 TestNetworkPlugins/group/kindnet/NetCatPod 9.2
307 TestNetworkPlugins/group/calico/Start 48.84
308 TestNetworkPlugins/group/kindnet/DNS 0.14
309 TestNetworkPlugins/group/kindnet/Localhost 0.13
310 TestNetworkPlugins/group/kindnet/HairPin 0.12
311 TestNetworkPlugins/group/custom-flannel/Start 51.56
312 TestNetworkPlugins/group/calico/ControllerPod 6.01
313 TestNetworkPlugins/group/calico/KubeletFlags 0.32
314 TestNetworkPlugins/group/calico/NetCatPod 9.33
315 TestNetworkPlugins/group/calico/DNS 0.13
316 TestNetworkPlugins/group/calico/Localhost 0.1
317 TestNetworkPlugins/group/calico/HairPin 0.11
318 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
319 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.24
320 TestNetworkPlugins/group/custom-flannel/DNS 0.16
321 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
322 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
323 TestNetworkPlugins/group/enable-default-cni/Start 62.26
324 TestNetworkPlugins/group/flannel/Start 48.62
325 TestNetworkPlugins/group/bridge/Start 65.8
326 TestNetworkPlugins/group/flannel/ControllerPod 6.01
327 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
328 TestNetworkPlugins/group/flannel/NetCatPod 9.21
329 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
330 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.21
331 TestNetworkPlugins/group/flannel/DNS 0.13
332 TestNetworkPlugins/group/flannel/Localhost 0.11
333 TestNetworkPlugins/group/flannel/HairPin 0.11
334 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
335 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
336 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
337 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
338 TestNetworkPlugins/group/bridge/NetCatPod 8.2
339 TestNetworkPlugins/group/bridge/DNS 0.15
340 TestNetworkPlugins/group/bridge/Localhost 0.14
341 TestNetworkPlugins/group/bridge/HairPin 0.13
343 TestStartStop/group/old-k8s-version/serial/FirstStart 51.06
345 TestStartStop/group/no-preload/serial/FirstStart 49.01
347 TestStartStop/group/embed-certs/serial/FirstStart 42.69
348 TestStoppedBinaryUpgrade/MinikubeLogs 0.71
350 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.59
351 TestStartStop/group/no-preload/serial/DeployApp 9.27
352 TestStartStop/group/old-k8s-version/serial/DeployApp 8.29
353 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.86
354 TestStartStop/group/no-preload/serial/Stop 12.1
355 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.93
356 TestStartStop/group/old-k8s-version/serial/Stop 12.03
357 TestStartStop/group/embed-certs/serial/DeployApp 10.23
358 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
359 TestStartStop/group/no-preload/serial/SecondStart 50.25
360 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
361 TestStartStop/group/old-k8s-version/serial/SecondStart 50.54
362 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.32
363 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.77
364 TestStartStop/group/embed-certs/serial/Stop 12.07
365 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.05
366 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.51
367 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
368 TestStartStop/group/embed-certs/serial/SecondStart 59.99
369 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
370 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.82
371 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
372 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
373 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
374 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
375 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
377 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
380 TestStartStop/group/newest-cni/serial/FirstStart 25.14
381 TestPreload/PreloadSrc/gcs 12.92
382 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
383 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
384 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
385 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
386 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
388 TestPreload/PreloadSrc/github 9.26
389 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.32
391 TestPreload/PreloadSrc/gcs-cached 0.97
392 TestStartStop/group/newest-cni/serial/DeployApp 0
393 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.75
394 TestStartStop/group/newest-cni/serial/Stop 1.45
395 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
396 TestStartStop/group/newest-cni/serial/SecondStart 9.64
397 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
398 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
399 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
TestDownloadOnly/v1.28.0/json-events (13.76s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-729149 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-729149 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.764615267s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (13.76s)
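
For context, the json-events subtest drives `minikube start -o=json`, which prints one CloudEvents-style JSON object per line on stdout. A sketch of consuming that stream outside the test harness (profile name illustrative; assumes `jq` is installed and the `io.k8s.sigs.minikube.step` event type that current minikube releases emit):

	out/minikube-linux-amd64 start -o=json --download-only -p demo-json --force \
	  --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'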

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1228 06:28:27.092781  555878 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1228 06:28:27.092868  555878 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-552174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
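
The preload-exists check above only verifies that the cached tarball is present on disk; assuming the default MINIKUBE_HOME, the equivalent manual check is:

	ls "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4"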

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-729149
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-729149: exit status 85 (76.681105ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-729149 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-729149 │ jenkins │ v1.37.0 │ 28 Dec 25 06:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:28:13
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:28:13.382109  555890 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:28:13.382394  555890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:28:13.382407  555890 out.go:374] Setting ErrFile to fd 2...
	I1228 06:28:13.382413  555890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:28:13.382626  555890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	W1228 06:28:13.382778  555890 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22352-552174/.minikube/config/config.json: open /home/jenkins/minikube-integration/22352-552174/.minikube/config/config.json: no such file or directory
	I1228 06:28:13.383297  555890 out.go:368] Setting JSON to true
	I1228 06:28:13.384226  555890 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11437,"bootTime":1766891856,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:28:13.384290  555890 start.go:143] virtualization: kvm guest
	I1228 06:28:13.391531  555890 out.go:99] [download-only-729149] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:28:13.391680  555890 notify.go:221] Checking for updates...
	W1228 06:28:13.391735  555890 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22352-552174/.minikube/cache/preloaded-tarball: no such file or directory
	I1228 06:28:13.392903  555890 out.go:171] MINIKUBE_LOCATION=22352
	I1228 06:28:13.394282  555890 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:28:13.395487  555890 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 06:28:13.396677  555890 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	I1228 06:28:13.397812  555890 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1228 06:28:13.399775  555890 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1228 06:28:13.400034  555890 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:28:13.424274  555890 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:28:13.424393  555890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:28:13.622261  555890 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-28 06:28:13.611806286 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:28:13.622372  555890 docker.go:319] overlay module found
	I1228 06:28:13.623699  555890 out.go:99] Using the docker driver based on user configuration
	I1228 06:28:13.623738  555890 start.go:309] selected driver: docker
	I1228 06:28:13.623745  555890 start.go:928] validating driver "docker" against <nil>
	I1228 06:28:13.623839  555890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:28:13.678498  555890 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-28 06:28:13.669415475 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:28:13.678682  555890 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 06:28:13.679302  555890 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1228 06:28:13.679500  555890 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 06:28:13.681097  555890 out.go:171] Using Docker driver with root privileges
	I1228 06:28:13.682133  555890 cni.go:84] Creating CNI manager for ""
	I1228 06:28:13.682209  555890 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 06:28:13.682230  555890 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1228 06:28:13.682296  555890 start.go:353] cluster config:
	{Name:download-only-729149 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-729149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:28:13.683495  555890 out.go:99] Starting "download-only-729149" primary control-plane node in "download-only-729149" cluster
	I1228 06:28:13.683512  555890 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 06:28:13.684461  555890 out.go:99] Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:28:13.684509  555890 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1228 06:28:13.684620  555890 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:28:13.700893  555890 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 to local cache
	I1228 06:28:13.701121  555890 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local cache directory
	I1228 06:28:13.701261  555890 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 to local cache
	I1228 06:28:13.784541  555890 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1228 06:28:13.784581  555890 cache.go:65] Caching tarball of preloaded images
	I1228 06:28:13.784761  555890 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1228 06:28:13.786579  555890 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1228 06:28:13.786607  555890 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1228 06:28:13.786614  555890 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1228 06:28:13.898250  555890 preload.go:313] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1228 06:28:13.898388  555890 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/22352-552174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-729149 host does not exist
	  To start a cluster, run: "minikube start -p download-only-729149"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
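
Exit status 85 is tolerated here: a download-only profile has no running host to collect logs from, so the test passes despite the non-zero exit. The audit table shown in the stdout above can be pulled for any profile with the same command the test runs:

	out/minikube-linux-amd64 logs -p download-only-729149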

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-729149
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.35.0/json-events (10.81s)

=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-809443 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-809443 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (10.814060046s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (10.81s)

TestDownloadOnly/v1.35.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1228 06:28:38.358289  555878 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1228 06:28:38.358332  555878 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-552174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-809443
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-809443: exit status 85 (74.096245ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-729149 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-729149 │ jenkins │ v1.37.0 │ 28 Dec 25 06:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 28 Dec 25 06:28 UTC │ 28 Dec 25 06:28 UTC │
	│ delete  │ -p download-only-729149                                                                                                                                                               │ download-only-729149 │ jenkins │ v1.37.0 │ 28 Dec 25 06:28 UTC │ 28 Dec 25 06:28 UTC │
	│ start   │ -o=json --download-only -p download-only-809443 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-809443 │ jenkins │ v1.37.0 │ 28 Dec 25 06:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:28:27
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:28:27.597574  556281 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:28:27.597822  556281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:28:27.597830  556281 out.go:374] Setting ErrFile to fd 2...
	I1228 06:28:27.597835  556281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:28:27.598046  556281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 06:28:27.598517  556281 out.go:368] Setting JSON to true
	I1228 06:28:27.599310  556281 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11452,"bootTime":1766891856,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:28:27.599370  556281 start.go:143] virtualization: kvm guest
	I1228 06:28:27.600971  556281 out.go:99] [download-only-809443] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:28:27.601141  556281 notify.go:221] Checking for updates...
	I1228 06:28:27.602263  556281 out.go:171] MINIKUBE_LOCATION=22352
	I1228 06:28:27.603536  556281 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:28:27.604657  556281 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 06:28:27.605662  556281 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	I1228 06:28:27.606701  556281 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1228 06:28:27.608545  556281 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1228 06:28:27.608796  556281 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:28:27.632959  556281 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:28:27.633067  556281 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:28:27.686523  556281 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-28 06:28:27.676854751 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:28:27.686617  556281 docker.go:319] overlay module found
	I1228 06:28:27.688077  556281 out.go:99] Using the docker driver based on user configuration
	I1228 06:28:27.688110  556281 start.go:309] selected driver: docker
	I1228 06:28:27.688116  556281 start.go:928] validating driver "docker" against <nil>
	I1228 06:28:27.688207  556281 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:28:27.745924  556281 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-28 06:28:27.736646041 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:28:27.746079  556281 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 06:28:27.746625  556281 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1228 06:28:27.746781  556281 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 06:28:27.748266  556281 out.go:171] Using Docker driver with root privileges
	I1228 06:28:27.749203  556281 cni.go:84] Creating CNI manager for ""
	I1228 06:28:27.749301  556281 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 06:28:27.749317  556281 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1228 06:28:27.749377  556281 start.go:353] cluster config:
	{Name:download-only-809443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:download-only-809443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:28:27.750451  556281 out.go:99] Starting "download-only-809443" primary control-plane node in "download-only-809443" cluster
	I1228 06:28:27.750477  556281 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 06:28:27.751408  556281 out.go:99] Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:28:27.751437  556281 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 06:28:27.751539  556281 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:28:27.768030  556281 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 to local cache
	I1228 06:28:27.768168  556281 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local cache directory
	I1228 06:28:27.768189  556281 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local cache directory, skipping pull
	I1228 06:28:27.768196  556281 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in cache, skipping pull
	I1228 06:28:27.768207  556281 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 as a tarball
	I1228 06:28:27.853494  556281 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-amd64.tar.lz4
	I1228 06:28:27.853526  556281 cache.go:65] Caching tarball of preloaded images
	I1228 06:28:27.854265  556281 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 06:28:27.855699  556281 out.go:99] Downloading Kubernetes v1.35.0 preload ...
	I1228 06:28:27.855723  556281 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-amd64.tar.lz4
	I1228 06:28:27.855729  556281 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1228 06:28:27.962925  556281 preload.go:313] Got checksum from GCS API "7d4614bb45595d2b5caa7075d8cffd01"
	I1228 06:28:27.962994  556281 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:7d4614bb45595d2b5caa7075d8cffd01 -> /home/jenkins/minikube-integration/22352-552174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-amd64.tar.lz4
	I1228 06:28:37.495165  556281 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1228 06:28:37.495580  556281 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/download-only-809443/config.json ...
	I1228 06:28:37.495615  556281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/download-only-809443/config.json: {Name:mke5d29eab93af6b1fcc6d97a1e2c8efbd676dcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:28:37.495818  556281 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 06:28:37.496025  556281 download.go:114] Downloading: https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22352-552174/.minikube/cache/linux/amd64/v1.35.0/kubectl
	
	
	* The control-plane node download-only-809443 host does not exist
	  To start a cluster, run: "minikube start -p download-only-809443"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.08s)

TestDownloadOnly/v1.35.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.23s)

TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-809443
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.42s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-028467 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "download-docker-028467" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-028467
--- PASS: TestDownloadOnlyKic (0.42s)

TestBinaryMirror (0.83s)

=== RUN   TestBinaryMirror
I1228 06:28:39.522851  555878 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-838456 --alsologtostderr --binary-mirror http://127.0.0.1:34611 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-838456" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-838456
--- PASS: TestBinaryMirror (0.83s)
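
The binary-mirror test points minikube at an alternate HTTP location for the Kubernetes binaries instead of dl.k8s.io; the mirror URL below is whatever local server the harness started for this run (127.0.0.1:34611 here), so substitute your own mirror when reproducing:

	minikube start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:34611 \
	  --driver=docker --container-runtime=containerd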

TestOffline (47.86s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-377947 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-377947 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (45.369889697s)
helpers_test.go:176: Cleaning up "offline-containerd-377947" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-377947
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-377947: (2.493782974s)
--- PASS: TestOffline (47.86s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-704221
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-704221: exit status 85 (69.138812ms)

-- stdout --
	* Profile "addons-704221" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-704221"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-704221
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-704221: exit status 85 (66.148455ms)

-- stdout --
	* Profile "addons-704221" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-704221"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (127.56s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-704221 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-704221 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m7.564034064s)
--- PASS: TestAddons/Setup (127.56s)
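
The setup start enables the full addon matrix in a single invocation; individual addons can also be toggled after the cluster is up, as the later subtests do (addon name below is illustrative):

	minikube -p addons-704221 addons enable metrics-server
	minikube -p addons-704221 addons disable metrics-server --alsologtostderr -v=1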

TestAddons/serial/Volcano (38.95s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:870: volcano-scheduler stabilized in 13.58932ms
addons_test.go:878: volcano-admission stabilized in 13.815429ms
addons_test.go:886: volcano-controller stabilized in 14.075047ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-7764798495-gwhzv" [010180ba-15f9-4610-b78b-2f13efe3a736] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003604746s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-5986c947c8-vrlhw" [30372bf5-720e-406d-b42b-377e15e8f69e] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004193214s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-d9cfd74d6-cg5hk" [5ff452f5-f5c5-4d08-b352-9e78672252f1] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003466654s
addons_test.go:905: (dbg) Run:  kubectl --context addons-704221 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-704221 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-704221 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [db3cea20-136d-4c8a-a9c2-a02cfe9990bd] Pending
helpers_test.go:353: "test-job-nginx-0" [db3cea20-136d-4c8a-a9c2-a02cfe9990bd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [db3cea20-136d-4c8a-a9c2-a02cfe9990bd] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003280385s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-704221 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-704221 addons disable volcano --alsologtostderr -v=1: (11.587859657s)
--- PASS: TestAddons/serial/Volcano (38.95s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-704221 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-704221 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/serial/GCPAuth/FakeCredentials (9.46s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-704221 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-704221 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [fd9fc83d-53dd-472c-b97c-fbdcd8fb7cd0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [fd9fc83d-53dd-472c-b97c-fbdcd8fb7cd0] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004340918s
addons_test.go:696: (dbg) Run:  kubectl --context addons-704221 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-704221 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-704221 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.46s)

TestAddons/parallel/Registry (14.93s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 35.065317ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-82qdb" [269d3807-2198-4808-8abc-dfcea2ba7a56] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003952908s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-t77wz" [bc5deb8d-3b37-4003-98d6-99386eb7bef8] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004055672s
addons_test.go:394: (dbg) Run:  kubectl --context addons-704221 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-704221 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-704221 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.136584409s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-704221 ip
2025/12/28 06:32:00 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-704221 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.93s)
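
The registry check above probes the addon's in-cluster service DNS name from a throwaway busybox pod; the same probe can be run by hand against any cluster with the registry addon enabled:

	kubectl run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"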

TestAddons/parallel/RegistryCreds (0.66s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.511121ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-704221
addons_test.go:334: (dbg) Run:  kubectl --context addons-704221 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-704221 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.66s)

TestAddons/parallel/Ingress (19.13s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-704221 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-704221 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-704221 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [3d1ddcc6-0a8d-4020-946d-c2dc8c16ef2d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [3d1ddcc6-0a8d-4020-946d-c2dc8c16ef2d] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003835843s
I1228 06:32:01.309918  555878 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-704221 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-704221 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-704221 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-704221 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-704221 addons disable ingress-dns --alsologtostderr -v=1: (1.03550614s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-704221 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-704221 addons disable ingress --alsologtostderr -v=1: (7.812718421s)
--- PASS: TestAddons/parallel/Ingress (19.13s)

TestAddons/parallel/InspektorGadget (10.65s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-h9nhj" [2ff80530-eede-4475-bd07-e4f4eccb11ac] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0037777s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-704221 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-704221 addons disable inspektor-gadget --alsologtostderr -v=1: (5.64799053s)
--- PASS: TestAddons/parallel/InspektorGadget (10.65s)

TestAddons/parallel/MetricsServer (5.87s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 34.999792ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-ld4hm" [d4fc7fad-de1f-4126-a1dd-23dc98ec03ab] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003992152s
addons_test.go:465: (dbg) Run:  kubectl --context addons-704221 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-704221 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.87s)

TestAddons/parallel/CSI (50.26s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1228 06:31:51.467905  555878 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1228 06:31:51.473188  555878 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1228 06:31:51.473256  555878 kapi.go:107] duration metric: took 5.383954ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 5.41139ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-704221 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-704221 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [ae5cb48d-6172-42c3-ada8-4f2cc621b443] Pending
helpers_test.go:353: "task-pv-pod" [ae5cb48d-6172-42c3-ada8-4f2cc621b443] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [ae5cb48d-6172-42c3-ada8-4f2cc621b443] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.00342311s
addons_test.go:574: (dbg) Run:  kubectl --context addons-704221 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-704221 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-704221 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-704221 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-704221 delete pod task-pv-pod: (1.117063942s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-704221 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-704221 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-704221 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [7bd2e828-27a4-4004-988b-93833ff7d886] Pending
helpers_test.go:353: "task-pv-pod-restore" [7bd2e828-27a4-4004-988b-93833ff7d886] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.004187373s
addons_test.go:616: (dbg) Run:  kubectl --context addons-704221 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-704221 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-704221 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-704221 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-704221 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-704221 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.514420405s)
--- PASS: TestAddons/parallel/CSI (50.26s)

TestAddons/parallel/Headlamp (17.39s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-704221 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-6d8d595f-685fj" [16d6d203-fe0c-483e-a7fa-d3c1dc0b8041] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-6d8d595f-685fj" [16d6d203-fe0c-483e-a7fa-d3c1dc0b8041] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003664656s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-704221 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-704221 addons disable headlamp --alsologtostderr -v=1: (5.673262004s)
--- PASS: TestAddons/parallel/Headlamp (17.39s)

TestAddons/parallel/CloudSpanner (5.46s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-v59lc" [3d6d29a0-ce33-40bf-b64e-a0dcd7a2b50b] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003424217s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-704221 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.46s)

TestAddons/parallel/LocalPath (10.12s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-704221 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-704221 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-704221 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [bb57e7af-3c39-4881-8f05-cb3d4570c46c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [bb57e7af-3c39-4881-8f05-cb3d4570c46c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [bb57e7af-3c39-4881-8f05-cb3d4570c46c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003405196s
addons_test.go:969: (dbg) Run:  kubectl --context addons-704221 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-704221 ssh "cat /opt/local-path-provisioner/pvc-1dceef93-b222-441a-a891-d2f9efdf0f59_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-704221 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-704221 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-704221 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.12s)

TestAddons/parallel/NvidiaDevicePlugin (5.45s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-s7h7d" [51571023-9229-4f3b-939f-5cf4a485d9fe] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003586528s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-704221 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.45s)

TestAddons/parallel/Yakd (10.68s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-q4khp" [04a8efc8-f2b2-40e3-a158-023f40eb97d3] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003518971s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-704221 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-704221 addons disable yakd --alsologtostderr -v=1: (5.671092773s)
--- PASS: TestAddons/parallel/Yakd (10.68s)

TestAddons/parallel/AmdGpuDevicePlugin (5.49s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-mdvm2" [dd58607b-e0d2-4bd3-a596-260a074867d0] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003901385s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-704221 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.49s)

TestAddons/StoppedEnableDisable (12.58s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-704221
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-704221: (12.290625336s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-704221
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-704221
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-704221
--- PASS: TestAddons/StoppedEnableDisable (12.58s)

TestCertOptions (27.32s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-948332 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-948332 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (24.153483599s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-948332 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-948332 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-948332 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-948332" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-948332
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-948332: (2.45584513s)
--- PASS: TestCertOptions (27.32s)

TestCertExpiration (217.63s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-160202 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-160202 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (29.814192472s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-160202 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-160202 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.289840578s)
helpers_test.go:176: Cleaning up "cert-expiration-160202" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-160202
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-160202: (2.521043469s)
--- PASS: TestCertExpiration (217.63s)

TestForceSystemdFlag (32.66s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-438914 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-438914 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (29.262517052s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-438914 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-438914" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-438914
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-438914: (3.056863958s)
--- PASS: TestForceSystemdFlag (32.66s)

TestForceSystemdEnv (28.38s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-455558 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1228 06:55:26.766628  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-455558 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (23.316788036s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-455558 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-455558" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-455558
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-455558: (4.70976272s)
--- PASS: TestForceSystemdEnv (28.38s)

TestDockerEnvContainerd (32.72s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-949565 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-949565 --driver=docker  --container-runtime=containerd: (16.750967663s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-949565"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-949565": (1.019328893s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXzaneXc/agent.579852" SSH_AGENT_PID="579853" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXzaneXc/agent.579852" SSH_AGENT_PID="579853" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXzaneXc/agent.579852" SSH_AGENT_PID="579853" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (2.136574726s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXzaneXc/agent.579852" SSH_AGENT_PID="579853" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:176: Cleaning up "dockerenv-949565" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-949565
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-949565: (2.279826583s)
--- PASS: TestDockerEnvContainerd (32.72s)

TestErrorSpam/setup (19.17s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-544265 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-544265 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-544265 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-544265 --driver=docker  --container-runtime=containerd: (19.174667353s)
--- PASS: TestErrorSpam/setup (19.17s)

TestErrorSpam/start (0.66s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-544265 --log_dir /tmp/nospam-544265 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-544265 --log_dir /tmp/nospam-544265 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-544265 --log_dir /tmp/nospam-544265 start --dry-run
--- PASS: TestErrorSpam/start (0.66s)

TestErrorSpam/status (0.96s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-544265 --log_dir /tmp/nospam-544265 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-544265 --log_dir /tmp/nospam-544265 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-544265 --log_dir /tmp/nospam-544265 status
--- PASS: TestErrorSpam/status (0.96s)

TestErrorSpam/pause (1.21s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-544265 --log_dir /tmp/nospam-544265 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-544265 --log_dir /tmp/nospam-544265 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-544265 --log_dir /tmp/nospam-544265 pause
--- PASS: TestErrorSpam/pause (1.21s)

TestErrorSpam/unpause (1.22s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-544265 --log_dir /tmp/nospam-544265 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-544265 --log_dir /tmp/nospam-544265 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-544265 --log_dir /tmp/nospam-544265 unpause
--- PASS: TestErrorSpam/unpause (1.22s)

TestErrorSpam/stop (2.01s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-544265 --log_dir /tmp/nospam-544265 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-544265 --log_dir /tmp/nospam-544265 stop: (1.807566738s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-544265 --log_dir /tmp/nospam-544265 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-544265 --log_dir /tmp/nospam-544265 stop
--- PASS: TestErrorSpam/stop (2.01s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22352-552174/.minikube/files/etc/test/nested/copy/555878/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (38.94s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-amd64 start -p functional-933591 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2244: (dbg) Done: out/minikube-linux-amd64 start -p functional-933591 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (38.937347306s)
--- PASS: TestFunctional/serial/StartWithProxy (38.94s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.76s)

=== RUN   TestFunctional/serial/SoftStart
I1228 06:34:41.810479  555878 config.go:182] Loaded profile config "functional-933591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-933591 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-933591 --alsologtostderr -v=8: (5.762481744s)
functional_test.go:678: soft start took 5.76337378s for "functional-933591" cluster.
I1228 06:34:47.573522  555878 config.go:182] Loaded profile config "functional-933591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (5.76s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-933591 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.46s)

TestFunctional/serial/CacheCmd/cache/add_local (2.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-933591 /tmp/TestFunctionalserialCacheCmdcacheadd_local3328947151/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 cache add minikube-local-cache-test:functional-933591
functional_test.go:1109: (dbg) Done: out/minikube-linux-amd64 -p functional-933591 cache add minikube-local-cache-test:functional-933591: (1.687976101s)
functional_test.go:1114: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 cache delete minikube-local-cache-test:functional-933591
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-933591
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.02s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933591 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (284.291769ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 kubectl -- --context functional-933591 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-933591 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (26.78s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-933591 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-933591 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (26.780263282s)
functional_test.go:776: restart took 26.780402632s for "functional-933591" cluster.
I1228 06:35:21.241606  555878 config.go:182] Loaded profile config "functional-933591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (26.78s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-933591 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.67s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 logs
--- PASS: TestFunctional/serial/LogsCmd (0.67s)

TestFunctional/serial/LogsFileCmd (0.68s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 logs --file /tmp/TestFunctionalserialLogsFileCmd2469346610/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.68s)

TestFunctional/serial/InvalidService (3.62s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-933591 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-933591
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-933591: exit status 115 (371.395809ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32129 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-933591 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.62s)

TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933591 config get cpus: exit status 14 (100.843007ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933591 config get cpus: exit status 14 (78.140356ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)

TestFunctional/parallel/DashboardCmd (31.32s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-933591 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-933591 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 595460: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (31.32s)

TestFunctional/parallel/DryRun (0.41s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-amd64 start -p functional-933591 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-933591 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (176.729805ms)

-- stdout --
	* [functional-933591] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1228 06:35:28.218397  594930 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:35:28.218515  594930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:35:28.218525  594930 out.go:374] Setting ErrFile to fd 2...
	I1228 06:35:28.218529  594930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:35:28.218750  594930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 06:35:28.219240  594930 out.go:368] Setting JSON to false
	I1228 06:35:28.220345  594930 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11872,"bootTime":1766891856,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:35:28.220407  594930 start.go:143] virtualization: kvm guest
	I1228 06:35:28.222108  594930 out.go:179] * [functional-933591] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:35:28.223521  594930 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:35:28.223550  594930 notify.go:221] Checking for updates...
	I1228 06:35:28.226083  594930 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:35:28.227178  594930 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 06:35:28.228579  594930 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	I1228 06:35:28.229752  594930 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:35:28.230870  594930 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:35:28.232617  594930 config.go:182] Loaded profile config "functional-933591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:35:28.233386  594930 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:35:28.257589  594930 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:35:28.257667  594930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:35:28.313823  594930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-28 06:35:28.304047429 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:35:28.313942  594930 docker.go:319] overlay module found
	I1228 06:35:28.315568  594930 out.go:179] * Using the docker driver based on existing profile
	I1228 06:35:28.316702  594930 start.go:309] selected driver: docker
	I1228 06:35:28.316719  594930 start.go:928] validating driver "docker" against &{Name:functional-933591 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-933591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:35:28.316827  594930 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:35:28.318439  594930 out.go:203] 
	W1228 06:35:28.319606  594930 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1228 06:35:28.320650  594930 out.go:203] 

** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 start -p functional-933591 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.41s)
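
The two RSRC_INSUFFICIENT_REQ_MEMORY lines above are the expected outcome: the DryRun test deliberately requests 250MB so that minikube's pre-flight validation rejects it before any driver work starts. A minimal sketch of that kind of guard, with hypothetical names rather than minikube's actual code:

package main

import (
	"fmt"
	"os"
)

// requireMinimumMemory stands in for the kind of pre-flight check that
// yields RSRC_INSUFFICIENT_REQ_MEMORY: compare the request against a
// floor before starting the driver at all.
func requireMinimumMemory(requestedMiB, minimumMiB int) error {
	if requestedMiB < minimumMiB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMiB",
			requestedMiB, minimumMiB)
	}
	return nil
}

func main() {
	// 250, as passed via --memory 250MB in the command above.
	if err := requireMinimumMemory(250, 1800); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23) // the exit status observed in the log
	}
}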

TestFunctional/parallel/InternationalLanguage (0.18s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 start -p functional-933591 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-933591 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (174.907585ms)

-- stdout --
	* [functional-933591] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1228 06:35:28.030718  594816 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:35:28.031046  594816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:35:28.031059  594816 out.go:374] Setting ErrFile to fd 2...
	I1228 06:35:28.031066  594816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:35:28.031465  594816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 06:35:28.032048  594816 out.go:368] Setting JSON to false
	I1228 06:35:28.033084  594816 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11872,"bootTime":1766891856,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:35:28.033149  594816 start.go:143] virtualization: kvm guest
	I1228 06:35:28.035184  594816 out.go:179] * [functional-933591] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1228 06:35:28.036397  594816 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:35:28.036398  594816 notify.go:221] Checking for updates...
	I1228 06:35:28.037594  594816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:35:28.038805  594816 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 06:35:28.039938  594816 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	I1228 06:35:28.041057  594816 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:35:28.042253  594816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:35:28.043851  594816 config.go:182] Loaded profile config "functional-933591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:35:28.044399  594816 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:35:28.070493  594816 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:35:28.070595  594816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:35:28.135852  594816 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-28 06:35:28.123882452 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:35:28.136012  594816 docker.go:319] overlay module found
	I1228 06:35:28.137777  594816 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1228 06:35:28.138843  594816 start.go:309] selected driver: docker
	I1228 06:35:28.138864  594816 start.go:928] validating driver "docker" against &{Name:functional-933591 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-933591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:35:28.138956  594816 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:35:28.140659  594816 out.go:203] 
	W1228 06:35:28.141729  594816 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1228 06:35:28.142683  594816 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
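
The French output above is the point of this test: with a French locale set, minikube localizes its messages and still takes the same RSRC_INSUFFICIENT_REQ_MEMORY exit path (status 23) as the English DryRun run. "Utilisation du pilote docker basé sur le profil existant" is "Using the docker driver based on existing profile", and the X line is "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB". A rough sketch of locale-driven message lookup, using a hypothetical catalog rather than minikube's real translation machinery:

package main

import (
	"fmt"
	"os"
	"strings"
)

// translations is a hypothetical message catalog keyed by language,
// then by message ID.
var translations = map[string]map[string]string{
	"en": {"using-driver": "Using the docker driver based on existing profile"},
	"fr": {"using-driver": "Utilisation du pilote docker basé sur le profil existant"},
}

// userLanguage derives a language code from the usual locale
// variables, checked in the conventional precedence order.
func userLanguage() string {
	for _, v := range []string{"LC_ALL", "LC_MESSAGES", "LANG"} {
		if val := os.Getenv(v); val != "" {
			return strings.SplitN(val, "_", 2)[0] // "fr_FR.UTF-8" -> "fr"
		}
	}
	return "en"
}

func main() {
	msgs, ok := translations[userLanguage()]
	if !ok {
		msgs = translations["en"] // fall back to English
	}
	fmt.Println("* " + msgs["using-driver"])
}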

TestFunctional/parallel/StatusCmd (1.04s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)
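
The -f argument in the second status invocation is a Go text/template evaluated against the status struct, which is why the fields appear as {{.Host}}, {{.Kubelet}}, and so on (the "kublet" label is reproduced verbatim from the test's format string). A small sketch of the same mechanism against an assumed Status shape:

package main

import (
	"os"
	"text/template"
)

// Status carries just the fields the logged template references;
// the real minikube status struct has more.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// The template text from the logged command, verbatim.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}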

TestFunctional/parallel/ServiceCmdConnect (12.54s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-933591 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-933591 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-x8hj9" [f1d755b1-4c24-4997-9674-bcc682297ba6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-x8hj9" [f1d755b1-4c24-4997-9674-bcc682297ba6] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.003051182s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:31639
functional_test.go:1685: http://192.168.49.2:31639: success! body:
Request served by hello-node-connect-5d95464fd4-x8hj9

HTTP/1.1 GET /

Host: 192.168.49.2:31639
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.54s)
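
The success line above is the final step of this test: resolve the NodePort URL via `minikube service --url`, then fetch it and check that the echo-server identifies itself in the body. A minimal sketch of that verification step (the URL is the one from this run; the real test also retries on transient failures):

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// As printed by `minikube service hello-node-connect --url` above.
	url := "http://192.168.49.2:31639"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// The echo-server reports which pod served the request.
	if !strings.Contains(string(body), "Request served by") {
		panic(fmt.Sprintf("unexpected body: %q", body))
	}
	fmt.Printf("%s: success! body:\n%s", url, body)
}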

TestFunctional/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (44.46s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [f1e11acc-c57d-4300-aa65-3529bb68f6c6] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003087123s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-933591 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-933591 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-933591 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-933591 apply -f testdata/storage-provisioner/pod.yaml
I1228 06:35:33.116645  555878 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [3262c10f-f3d1-468f-a613-905adb2eb5cc] Pending
helpers_test.go:353: "sp-pod" [3262c10f-f3d1-468f-a613-905adb2eb5cc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: persistentvolumeclaim "myclaim" not found. not found)
helpers_test.go:353: "sp-pod" [3262c10f-f3d1-468f-a613-905adb2eb5cc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [3262c10f-f3d1-468f-a613-905adb2eb5cc] Running
E1228 06:35:58.830347  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 30.003348296s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-933591 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-933591 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-933591 delete -f testdata/storage-provisioner/pod.yaml: (1.721121087s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-933591 apply -f testdata/storage-provisioner/pod.yaml
I1228 06:36:05.097077  555878 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [ff69b35c-e77c-4f68-bec6-65f5798edb4e] Pending
helpers_test.go:353: "sp-pod" [ff69b35c-e77c-4f68-bec6-65f5798edb4e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003617702s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-933591 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.46s)

TestFunctional/parallel/SSHCmd (0.66s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

TestFunctional/parallel/CpCmd (1.66s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh -n functional-933591 "sudo cat /home/docker/cp-test.txt"
E1228 06:36:09.070883  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 cp functional-933591:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2336751391/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh -n functional-933591 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh -n functional-933591 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.66s)

TestFunctional/parallel/MySQL (26.41s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1803: (dbg) Run:  kubectl --context functional-933591 replace --force -f testdata/mysql.yaml
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-6xv7b" [51c99ea0-1c5f-4337-9088-1c89d39adcdd] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-6xv7b" [51c99ea0-1c5f-4337-9088-1c89d39adcdd] Running
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.003894782s
functional_test.go:1817: (dbg) Run:  kubectl --context functional-933591 exec mysql-7d7b65bc95-6xv7b -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-933591 exec mysql-7d7b65bc95-6xv7b -- mysql -ppassword -e "show databases;": exit status 1 (134.322129ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1228 06:36:28.122992  555878 retry.go:84] will retry after 1s: exit status 1 (duplicate log for 1m1s)
functional_test.go:1817: (dbg) Run:  kubectl --context functional-933591 exec mysql-7d7b65bc95-6xv7b -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-933591 exec mysql-7d7b65bc95-6xv7b -- mysql -ppassword -e "show databases;": exit status 1 (165.357508ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
E1228 06:36:29.551944  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1817: (dbg) Run:  kubectl --context functional-933591 exec mysql-7d7b65bc95-6xv7b -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-933591 exec mysql-7d7b65bc95-6xv7b -- mysql -ppassword -e "show databases;": exit status 1 (112.046209ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-933591 exec mysql-7d7b65bc95-6xv7b -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.41s)
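
The three failed probes above are the normal window while mysqld bootstraps: ERROR 1045 while the root account is still being initialized, then ERROR 2002 while the socket is briefly down, so the harness simply retries (the retry.go line shows the 1s backoff). A generic sketch of that retry-until-success shape, not minikube's retry.go itself:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retry runs fn up to attempts times, sleeping between tries, and
// returns the last error if it never succeeds.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	err := retry(10, time.Second, func() error {
		// The same probe the test issues; transient 1045/2002 errors
		// surface as a non-zero exit until mysqld is ready.
		return exec.Command("kubectl", "--context", "functional-933591",
			"exec", "mysql-7d7b65bc95-6xv7b", "--",
			"mysql", "-ppassword", "-e", "show databases;").Run()
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("mysql is answering queries")
}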

TestFunctional/parallel/FileSync (0.29s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/555878/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh "sudo cat /etc/test/nested/copy/555878/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.7s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/555878.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh "sudo cat /etc/ssl/certs/555878.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/555878.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh "sudo cat /usr/share/ca-certificates/555878.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/5558782.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh "sudo cat /etc/ssl/certs/5558782.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/5558782.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh "sudo cat /usr/share/ca-certificates/5558782.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.70s)

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-933591 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933591 ssh "sudo systemctl is-active docker": exit status 1 (265.819023ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh "sudo systemctl is-active crio"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933591 ssh "sudo systemctl is-active crio": exit status 1 (261.061369ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)
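
The non-zero exits above are what the test wants: `systemctl is-active` exits 3 (printing "inactive") for a unit that is not running, and only exit 0 would mean the alternate runtime is active on this containerd node. A sketch of interpreting that convention (run inside the node; the test wraps it in `minikube ssh`):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// isActive reports whether a systemd unit is active. systemctl signals
// "not active" through a non-zero exit status, so the exit code is the
// answer rather than an error in itself.
func isActive(unit string) (bool, error) {
	err := exec.Command("systemctl", "is-active", "--quiet", unit).Run()
	if err == nil {
		return true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil // systemctl ran and reported not-active
	}
	return false, err // systemctl itself could not be executed
}

func main() {
	for _, unit := range []string{"docker", "crio"} {
		active, err := isActive(unit)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s active: %v\n", unit, active) // expect false on a containerd node
	}
}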

TestFunctional/parallel/License (0.54s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.54s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.5s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-933591 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-933591 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-933591 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-933591 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 593482: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.50s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-933591 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (26.26s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-933591 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [53b7b1f2-e5ac-4708-a00b-2a98c0125529] Pending
helpers_test.go:353: "nginx-svc" [53b7b1f2-e5ac-4708-a00b-2a98c0125529] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [53b7b1f2-e5ac-4708-a00b-2a98c0125529] Running
E1228 06:35:48.588058  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:48.593393  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:48.603672  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:48.623957  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:48.664286  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:48.744650  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:48.905556  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:49.225987  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:49.866922  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 26.004457998s
I1228 06:35:53.035428  555878 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (26.26s)
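
The setup above is the standard label-selector wait: poll until a pod matching run=nginx-svc reports Running, up to the 4m0s budget (the interleaved cert_rotation errors reference a deleted addons-704221 profile and are unrelated noise). A polling sketch using kubectl and jsonpath rather than client-go, assuming a single matching pod:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // the test's wait budget
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "functional-933591",
			"get", "pods", "-l", "run=nginx-svc",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		// With one matching pod, the jsonpath output is its bare phase.
		if err == nil && string(out) == "Running" {
			fmt.Println("run=nginx-svc healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for run=nginx-svc")
}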

TestFunctional/parallel/MountCmd/any-port (25.82s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-933591 /tmp/TestFunctionalparallelMountCmdany-port2174035523/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766903726807207943" to /tmp/TestFunctionalparallelMountCmdany-port2174035523/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766903726807207943" to /tmp/TestFunctionalparallelMountCmdany-port2174035523/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766903726807207943" to /tmp/TestFunctionalparallelMountCmdany-port2174035523/001/test-1766903726807207943
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933591 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (336.099636ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1228 06:35:27.143709  555878 retry.go:84] will retry after 400ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 28 06:35 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 28 06:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 28 06:35 test-1766903726807207943
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh cat /mount-9p/test-1766903726807207943
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-933591 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [4c102384-4ffd-45fc-97d6-8ffdaaf9e91c] Pending
helpers_test.go:353: "busybox-mount" [4c102384-4ffd-45fc-97d6-8ffdaaf9e91c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [4c102384-4ffd-45fc-97d6-8ffdaaf9e91c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E1228 06:35:51.148190  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox-mount" [4c102384-4ffd-45fc-97d6-8ffdaaf9e91c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 23.004470101s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-933591 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-933591 /tmp/TestFunctionalparallelMountCmdany-port2174035523/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (25.82s)

TestFunctional/parallel/MountCmd/specific-port (1.9s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-933591 /tmp/TestFunctionalparallelMountCmdspecific-port2017715251/001:/mount-9p --alsologtostderr -v=1 --port 38123]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933591 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (307.906868ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1228 06:35:52.940102  555878 retry.go:84] will retry after 500ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh "findmnt -T /mount-9p | grep 9p"
E1228 06:35:53.709363  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-933591 /tmp/TestFunctionalparallelMountCmdspecific-port2017715251/001:/mount-9p --alsologtostderr -v=1 --port 38123] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933591 ssh "sudo umount -f /mount-9p": exit status 1 (260.669326ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-933591 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-933591 /tmp/TestFunctionalparallelMountCmdspecific-port2017715251/001:/mount-9p --alsologtostderr -v=1 --port 38123] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.90s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-933591 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.159.50 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-933591 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-933591 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1770123865/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-933591 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1770123865/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-933591 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1770123865/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933591 ssh "findmnt -T" /mount1: exit status 1 (323.201106ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-933591 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-933591 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1770123865/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-933591 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1770123865/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-933591 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1770123865/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.15s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-933591 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-933591 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-tmc79" [6a0b5138-f8c1-48d9-9bf8-6d4b278e8ca0] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-tmc79" [6a0b5138-f8c1-48d9-9bf8-6d4b278e8ca0] Running
2025/12/28 06:35:59 [DEBUG] GET http://127.0.0.1:37255/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004687001s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.15s)

TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.52s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1335: Took "345.575635ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1349: Took "62.267113ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1386: Took "341.916768ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1399: Took "65.050653ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-933591 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-933591
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-933591
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-933591 image ls --format short --alsologtostderr:
I1228 06:36:10.741651  602712 out.go:360] Setting OutFile to fd 1 ...
I1228 06:36:10.741776  602712 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:36:10.741788  602712 out.go:374] Setting ErrFile to fd 2...
I1228 06:36:10.741794  602712 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:36:10.742158  602712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
I1228 06:36:10.742956  602712 config.go:182] Loaded profile config "functional-933591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 06:36:10.743113  602712 config.go:182] Loaded profile config "functional-933591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 06:36:10.743763  602712 cli_runner.go:164] Run: docker container inspect functional-933591 --format={{.State.Status}}
I1228 06:36:10.765698  602712 ssh_runner.go:195] Run: systemctl --version
I1228 06:36:10.765767  602712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933591
I1228 06:36:10.783634  602712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/functional-933591/id_rsa Username:docker}
I1228 06:36:10.874995  602712 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
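
The short format prints one image reference per line, so presence checks reduce to plain grep. A hedged sketch against the same profile (pattern taken from the listing above):

	$ out/minikube-linux-amd64 -p functional-933591 image ls --format short | grep -F 'registry.k8s.io/pause:3.10.1'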

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-933591 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ sha256:56cc51 │ 2.4MB  │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ sha256:04da2b │ 23MB   │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ sha256:550794 │ 17.2MB │
│ registry.k8s.io/pause                             │ 3.1                                   │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                             │ 3.10.1                                │ sha256:cd073f │ 320kB  │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ sha256:409467 │ 44.4MB │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ sha256:2c9a4b │ 23.1MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ sha256:aa5e3e │ 23.6MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ sha256:0a108f │ 23.6MB │
│ registry.k8s.io/pause                             │ latest                                │ sha256:350b16 │ 72.3kB │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ sha256:4921d7 │ 42.7MB │
│ docker.io/library/minikube-local-cache-test       │ functional-933591                     │ sha256:84caa1 │ 991B   │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-933591                     │ sha256:9056ab │ 2.37MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ sha256:5c6acd │ 27.7MB │
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ sha256:32652f │ 25.8MB │
│ registry.k8s.io/pause                             │ 3.3                                   │ sha256:0184c1 │ 298kB  │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-933591 image ls --format table --alsologtostderr:
I1228 06:36:11.240988  603000 out.go:360] Setting OutFile to fd 1 ...
I1228 06:36:11.241252  603000 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:36:11.241263  603000 out.go:374] Setting ErrFile to fd 2...
I1228 06:36:11.241267  603000 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:36:11.241464  603000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
I1228 06:36:11.242032  603000 config.go:182] Loaded profile config "functional-933591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 06:36:11.242130  603000 config.go:182] Loaded profile config "functional-933591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 06:36:11.242647  603000 cli_runner.go:164] Run: docker container inspect functional-933591 --format={{.State.Status}}
I1228 06:36:11.262767  603000 ssh_runner.go:195] Run: systemctl --version
I1228 06:36:11.262819  603000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933591
I1228 06:36:11.281147  603000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/functional-933591/id_rsa Username:docker}
I1228 06:36:11.375905  603000 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-933591 image ls --format json --alsologtostderr:
[{"id":"sha256:84caa118a9912aa5ef78b179c94bd3d2fde1590d77d774dc7476243653b85fc0","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-933591"],"size":"991"},
{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},
{"id":"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"23553139"},
{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},
{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},
{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},
{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},
{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-933591"],"size":"2372971"},
{"id":"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"23641797"},
{"id":"sha256:5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"27686334"},
{"id":"sha256:2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"23134163"},
{"id":"sha256:4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"42673934"},
{"id":"sha256:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":["public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22996569"},
{"id":"sha256:32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8","repoDigests":["registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"25789515"},
{"id":"sha256:550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"17237748"},
{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},
{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},
{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},
{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-933591 image ls --format json --alsologtostderr:
I1228 06:36:10.995427  602836 out.go:360] Setting OutFile to fd 1 ...
I1228 06:36:10.995662  602836 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:36:10.995671  602836 out.go:374] Setting ErrFile to fd 2...
I1228 06:36:10.995675  602836 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:36:10.995872  602836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
I1228 06:36:10.996422  602836 config.go:182] Loaded profile config "functional-933591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 06:36:10.996511  602836 config.go:182] Loaded profile config "functional-933591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 06:36:10.996964  602836 cli_runner.go:164] Run: docker container inspect functional-933591 --format={{.State.Status}}
I1228 06:36:11.016717  602836 ssh_runner.go:195] Run: systemctl --version
I1228 06:36:11.016781  602836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933591
I1228 06:36:11.038282  602836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/functional-933591/id_rsa Username:docker}
I1228 06:36:11.133818  602836 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
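
The JSON above is a flat array of {id, repoDigests, repoTags, size} objects, so it flattens naturally with jq. A minimal sketch, assuming jq on the host:

	$ out/minikube-linux-amd64 -p functional-933591 image ls --format json | jq -r '.[].repoTags[]' | sort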

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-933591 image ls --format yaml --alsologtostderr:
- id: sha256:32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "25789515"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:84caa118a9912aa5ef78b179c94bd3d2fde1590d77d774dc7476243653b85fc0
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-933591
size: "991"
- id: sha256:5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "27686334"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22996569"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "42673934"
- id: sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "23553139"
- id: sha256:2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "23134163"
- id: sha256:550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "17237748"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-933591
size: "2372971"
- id: sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "23641797"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-933591 image ls --format yaml --alsologtostderr:
I1228 06:36:10.767087  602724 out.go:360] Setting OutFile to fd 1 ...
I1228 06:36:10.767336  602724 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:36:10.767347  602724 out.go:374] Setting ErrFile to fd 2...
I1228 06:36:10.767352  602724 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:36:10.767590  602724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
I1228 06:36:10.768147  602724 config.go:182] Loaded profile config "functional-933591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 06:36:10.768265  602724 config.go:182] Loaded profile config "functional-933591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 06:36:10.768717  602724 cli_runner.go:164] Run: docker container inspect functional-933591 --format={{.State.Status}}
I1228 06:36:10.786944  602724 ssh_runner.go:195] Run: systemctl --version
I1228 06:36:10.786991  602724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933591
I1228 06:36:10.804175  602724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/functional-933591/id_rsa Username:docker}
I1228 06:36:10.894286  602724 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933591 ssh pgrep buildkitd: exit status 1 (291.457289ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 image build -t localhost/my-image:functional-933591 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-933591 image build -t localhost/my-image:functional-933591 testdata/build --alsologtostderr: (3.788744001s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-933591 image build -t localhost/my-image:functional-933591 testdata/build --alsologtostderr:
I1228 06:36:11.273096  603029 out.go:360] Setting OutFile to fd 1 ...
I1228 06:36:11.273972  603029 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:36:11.273984  603029 out.go:374] Setting ErrFile to fd 2...
I1228 06:36:11.273989  603029 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:36:11.274206  603029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
I1228 06:36:11.274815  603029 config.go:182] Loaded profile config "functional-933591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 06:36:11.275575  603029 config.go:182] Loaded profile config "functional-933591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 06:36:11.276239  603029 cli_runner.go:164] Run: docker container inspect functional-933591 --format={{.State.Status}}
I1228 06:36:11.296259  603029 ssh_runner.go:195] Run: systemctl --version
I1228 06:36:11.296310  603029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933591
I1228 06:36:11.315488  603029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/functional-933591/id_rsa Username:docker}
I1228 06:36:11.407313  603029 build_images.go:162] Building image from path: /tmp/build.1188806763.tar
I1228 06:36:11.407371  603029 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1228 06:36:11.416338  603029 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1188806763.tar
I1228 06:36:11.420544  603029 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1188806763.tar: stat -c "%s %y" /var/lib/minikube/build/build.1188806763.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1188806763.tar': No such file or directory
I1228 06:36:11.420585  603029 ssh_runner.go:362] scp /tmp/build.1188806763.tar --> /var/lib/minikube/build/build.1188806763.tar (3072 bytes)
I1228 06:36:11.439675  603029 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1188806763
I1228 06:36:11.447736  603029 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1188806763 -xf /var/lib/minikube/build/build.1188806763.tar
I1228 06:36:11.456719  603029 containerd.go:402] Building image: /var/lib/minikube/build/build.1188806763
I1228 06:36:11.456788  603029 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1188806763 --local dockerfile=/var/lib/minikube/build/build.1188806763 --output type=image,name=localhost/my-image:functional-933591
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.8s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:45a2b9d200ee99d35cf6ed441b7974189bc62cd1a24e42ddbe46fe326909a7fd done
#8 exporting config sha256:1a2db574c93f73eea46d10286002928380e9e0aa98de10881928ae9d51f01c10 done
#8 naming to localhost/my-image:functional-933591 done
#8 DONE 0.1s
I1228 06:36:14.964126  603029 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1188806763 --local dockerfile=/var/lib/minikube/build/build.1188806763 --output type=image,name=localhost/my-image:functional-933591: (3.507296816s)
I1228 06:36:14.964231  603029 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1188806763
I1228 06:36:14.975247  603029 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1188806763.tar
I1228 06:36:14.984277  603029 build_images.go:218] Built localhost/my-image:functional-933591 from /tmp/build.1188806763.tar
I1228 06:36:14.984314  603029 build_images.go:134] succeeded building to: functional-933591
I1228 06:36:14.984320  603029 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.34s)
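
The harness first checks for a running buildkitd (pgrep exits 1 here); the build itself is driven by minikube, which copies the context tarball into the node and invokes buildctl over SSH, as the stderr above shows. The resulting image lands in the node's containerd store rather than on the host, so a manual check has to go through the node. A sketch, assuming the cluster from this run were still up:

	$ out/minikube-linux-amd64 -p functional-933591 ssh -- sudo crictl images | grep my-image
	$ out/minikube-linux-amd64 -p functional-933591 image ls | grep localhost/my-image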

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0: (1.025958034s)
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-933591
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.05s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-933591 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-933591 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-933591 --alsologtostderr: (1.327055113s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)
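
image load --daemon transfers a tag from the host Docker daemon into the cluster's containerd store; the follow-up image ls is how the test confirms arrival. A hedged reproduction using the tag prepared in Setup above:

	$ docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-933591
	$ out/minikube-linux-amd64 -p functional-933591 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-933591
	$ out/minikube-linux-amd64 -p functional-933591 image ls | grep echo-server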

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-933591 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-933591
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-933591 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.54s)

TestFunctional/parallel/ServiceCmd/List (1.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 service list
functional_test.go:1474: (dbg) Done: out/minikube-linux-amd64 -p functional-933591 service list: (1.341934977s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.34s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)
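
All three update-context cases run the same command; it rewrites the kubeconfig entry for the profile so the server address matches the container's current IP and port. A hedged follow-up check (kubectl assumed on PATH; not part of the test itself):

	$ out/minikube-linux-amd64 -p functional-933591 update-context
	$ kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-933591")].cluster.server}'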

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 service list -o json
functional_test.go:1504: (dbg) Done: out/minikube-linux-amd64 -p functional-933591 service list -o json: (1.351937094s)
functional_test.go:1509: Took "1.352038682s" to run "out/minikube-linux-amd64 -p functional-933591 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.35s)
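
service list -o json emits the same listing in machine-readable form; piping through jq pretty-prints it (jq assumed installed; the exact field names are not reproduced in this report):

	$ out/minikube-linux-amd64 -p functional-933591 service list -o json | jq .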

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-933591 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-933591 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)
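
ImageSaveToFile, ImageRemove and ImageLoadFromFile together exercise a save/remove/load round trip through a tarball. Replayed by hand (tarball path shortened from the workspace path used above):

	$ out/minikube-linux-amd64 -p functional-933591 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-933591 /tmp/echo-server-save.tar
	$ out/minikube-linux-amd64 -p functional-933591 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-933591
	$ out/minikube-linux-amd64 -p functional-933591 image load /tmp/echo-server-save.tar
	$ out/minikube-linux-amd64 -p functional-933591 image ls | grep echo-server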

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:31050
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-933591
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-933591 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-933591
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.56s)

TestFunctional/parallel/ServiceCmd/URL (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-amd64 -p functional-933591 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:31050
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.57s)
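
The HTTPS/Format/URL subtests all resolve the same NodePort endpoint (https://192.168.49.2:31050 and http://192.168.49.2:31050 above). While the cluster exists, the endpoint can be exercised directly from the host; a hedged sketch:

	$ URL=$(out/minikube-linux-amd64 -p functional-933591 service hello-node --url)
	$ curl -s "$URL"   # expect an HTTP response from the hello-node backend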

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-933591
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-933591
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-933591
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (107.77s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1228 06:37:10.512361  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-331080 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m47.039405994s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (107.77s)
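
The --ha flag provisions a multi-control-plane cluster (three control planes plus, later in this run, a worker). Besides minikube status, the node set can be confirmed through kubectl, since the profile name doubles as the kubeconfig context (sketch; kubectl assumed installed):

	$ kubectl --context ha-331080 get nodes -o wide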

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.11s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-331080 kubectl -- rollout status deployment/busybox: (3.827556311s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 kubectl -- exec busybox-769dd8b7dd-4plln -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 kubectl -- exec busybox-769dd8b7dd-l5qqz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 kubectl -- exec busybox-769dd8b7dd-wfhm4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 kubectl -- exec busybox-769dd8b7dd-4plln -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 kubectl -- exec busybox-769dd8b7dd-l5qqz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 kubectl -- exec busybox-769dd8b7dd-wfhm4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 kubectl -- exec busybox-769dd8b7dd-4plln -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 kubectl -- exec busybox-769dd8b7dd-l5qqz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 kubectl -- exec busybox-769dd8b7dd-wfhm4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.11s)

TestMultiControlPlane/serial/PingHostFromPods (1.2s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 kubectl -- exec busybox-769dd8b7dd-4plln -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 kubectl -- exec busybox-769dd8b7dd-4plln -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 kubectl -- exec busybox-769dd8b7dd-l5qqz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 kubectl -- exec busybox-769dd8b7dd-l5qqz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 kubectl -- exec busybox-769dd8b7dd-wfhm4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 kubectl -- exec busybox-769dd8b7dd-wfhm4 -- sh -c "ping -c 1 192.168.49.1"
E1228 06:38:32.433578  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (28.71s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-331080 node add --alsologtostderr -v 5: (27.811622792s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (28.71s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-331080 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)
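
The jsonpath range above dumps every node's full label map on one line, which is compact but hard to read; kubectl's --show-labels flag is a friendlier equivalent (hypothetical follow-up, same context):

	$ kubectl --context ha-331080 get nodes --show-labels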

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

TestMultiControlPlane/serial/CopyFile (16.96s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 cp testdata/cp-test.txt ha-331080:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 cp ha-331080:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3452638110/001/cp-test_ha-331080.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 cp ha-331080:/home/docker/cp-test.txt ha-331080-m02:/home/docker/cp-test_ha-331080_ha-331080-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m02 "sudo cat /home/docker/cp-test_ha-331080_ha-331080-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 cp ha-331080:/home/docker/cp-test.txt ha-331080-m03:/home/docker/cp-test_ha-331080_ha-331080-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m03 "sudo cat /home/docker/cp-test_ha-331080_ha-331080-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 cp ha-331080:/home/docker/cp-test.txt ha-331080-m04:/home/docker/cp-test_ha-331080_ha-331080-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m04 "sudo cat /home/docker/cp-test_ha-331080_ha-331080-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 cp testdata/cp-test.txt ha-331080-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 cp ha-331080-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3452638110/001/cp-test_ha-331080-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 cp ha-331080-m02:/home/docker/cp-test.txt ha-331080:/home/docker/cp-test_ha-331080-m02_ha-331080.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080 "sudo cat /home/docker/cp-test_ha-331080-m02_ha-331080.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 cp ha-331080-m02:/home/docker/cp-test.txt ha-331080-m03:/home/docker/cp-test_ha-331080-m02_ha-331080-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m03 "sudo cat /home/docker/cp-test_ha-331080-m02_ha-331080-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 cp ha-331080-m02:/home/docker/cp-test.txt ha-331080-m04:/home/docker/cp-test_ha-331080-m02_ha-331080-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m04 "sudo cat /home/docker/cp-test_ha-331080-m02_ha-331080-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 cp testdata/cp-test.txt ha-331080-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 cp ha-331080-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3452638110/001/cp-test_ha-331080-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 cp ha-331080-m03:/home/docker/cp-test.txt ha-331080:/home/docker/cp-test_ha-331080-m03_ha-331080.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080 "sudo cat /home/docker/cp-test_ha-331080-m03_ha-331080.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 cp ha-331080-m03:/home/docker/cp-test.txt ha-331080-m02:/home/docker/cp-test_ha-331080-m03_ha-331080-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m02 "sudo cat /home/docker/cp-test_ha-331080-m03_ha-331080-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 cp ha-331080-m03:/home/docker/cp-test.txt ha-331080-m04:/home/docker/cp-test_ha-331080-m03_ha-331080-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m04 "sudo cat /home/docker/cp-test_ha-331080-m03_ha-331080-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 cp testdata/cp-test.txt ha-331080-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 cp ha-331080-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3452638110/001/cp-test_ha-331080-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 cp ha-331080-m04:/home/docker/cp-test.txt ha-331080:/home/docker/cp-test_ha-331080-m04_ha-331080.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080 "sudo cat /home/docker/cp-test_ha-331080-m04_ha-331080.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 cp ha-331080-m04:/home/docker/cp-test.txt ha-331080-m02:/home/docker/cp-test_ha-331080-m04_ha-331080-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m02 "sudo cat /home/docker/cp-test_ha-331080-m04_ha-331080-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 cp ha-331080-m04:/home/docker/cp-test.txt ha-331080-m03:/home/docker/cp-test_ha-331080-m04_ha-331080-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m03 "sudo cat /home/docker/cp-test_ha-331080-m04_ha-331080-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.96s)
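
Every cp above is verified by ssh'ing into the destination node and cat'ing the file back, covering all source/destination pairs across the four nodes (host to node, node to host, node to node). One representative pair, taken verbatim from the run:

	$ out/minikube-linux-amd64 -p ha-331080 cp testdata/cp-test.txt ha-331080-m02:/home/docker/cp-test.txt
	$ out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080-m02 "sudo cat /home/docker/cp-test.txt"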

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.67s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-331080 node stop m02 --alsologtostderr -v 5: (11.975267719s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-331080 status --alsologtostderr -v 5: exit status 7 (694.410554ms)

-- stdout --
	ha-331080
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-331080-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-331080-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-331080-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1228 06:39:31.211854  624472 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:39:31.211977  624472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:39:31.211987  624472 out.go:374] Setting ErrFile to fd 2...
	I1228 06:39:31.211993  624472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:39:31.212263  624472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 06:39:31.212452  624472 out.go:368] Setting JSON to false
	I1228 06:39:31.212478  624472 mustload.go:66] Loading cluster: ha-331080
	I1228 06:39:31.212569  624472 notify.go:221] Checking for updates...
	I1228 06:39:31.212873  624472 config.go:182] Loaded profile config "ha-331080": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:39:31.212901  624472 status.go:174] checking status of ha-331080 ...
	I1228 06:39:31.213386  624472 cli_runner.go:164] Run: docker container inspect ha-331080 --format={{.State.Status}}
	I1228 06:39:31.233494  624472 status.go:371] ha-331080 host status = "Running" (err=<nil>)
	I1228 06:39:31.233527  624472 host.go:66] Checking if "ha-331080" exists ...
	I1228 06:39:31.233810  624472 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-331080
	I1228 06:39:31.252121  624472 host.go:66] Checking if "ha-331080" exists ...
	I1228 06:39:31.252416  624472 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:39:31.252468  624472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-331080
	I1228 06:39:31.269342  624472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/ha-331080/id_rsa Username:docker}
	I1228 06:39:31.358625  624472 ssh_runner.go:195] Run: systemctl --version
	I1228 06:39:31.365361  624472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:39:31.377136  624472 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:39:31.433478  624472 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-28 06:39:31.423909516 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:39:31.434038  624472 kubeconfig.go:125] found "ha-331080" server: "https://192.168.49.254:8443"
	I1228 06:39:31.434072  624472 api_server.go:166] Checking apiserver status ...
	I1228 06:39:31.434114  624472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:39:31.447262  624472 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1344/cgroup
	I1228 06:39:31.456002  624472 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1344/cgroup
	I1228 06:39:31.464009  624472 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod436108052d27b85771d4812f17bb190d.slice/cri-containerd-a0508084f37a5361804e2af4243783cfd0bc5dfec945dbb9a6ab14faabcde3f0.scope/cgroup.freeze
	I1228 06:39:31.471474  624472 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1228 06:39:31.475510  624472 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1228 06:39:31.475534  624472 status.go:463] ha-331080 apiserver status = Running (err=<nil>)
	I1228 06:39:31.475546  624472 status.go:176] ha-331080 status: &{Name:ha-331080 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:39:31.475570  624472 status.go:174] checking status of ha-331080-m02 ...
	I1228 06:39:31.475833  624472 cli_runner.go:164] Run: docker container inspect ha-331080-m02 --format={{.State.Status}}
	I1228 06:39:31.493505  624472 status.go:371] ha-331080-m02 host status = "Stopped" (err=<nil>)
	I1228 06:39:31.493526  624472 status.go:384] host is not running, skipping remaining checks
	I1228 06:39:31.493535  624472 status.go:176] ha-331080-m02 status: &{Name:ha-331080-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:39:31.493566  624472 status.go:174] checking status of ha-331080-m03 ...
	I1228 06:39:31.493870  624472 cli_runner.go:164] Run: docker container inspect ha-331080-m03 --format={{.State.Status}}
	I1228 06:39:31.512235  624472 status.go:371] ha-331080-m03 host status = "Running" (err=<nil>)
	I1228 06:39:31.512265  624472 host.go:66] Checking if "ha-331080-m03" exists ...
	I1228 06:39:31.512506  624472 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-331080-m03
	I1228 06:39:31.530109  624472 host.go:66] Checking if "ha-331080-m03" exists ...
	I1228 06:39:31.530383  624472 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:39:31.530420  624472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-331080-m03
	I1228 06:39:31.547504  624472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/ha-331080-m03/id_rsa Username:docker}
	I1228 06:39:31.635909  624472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:39:31.648456  624472 kubeconfig.go:125] found "ha-331080" server: "https://192.168.49.254:8443"
	I1228 06:39:31.648483  624472 api_server.go:166] Checking apiserver status ...
	I1228 06:39:31.648515  624472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:39:31.660175  624472 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1277/cgroup
	I1228 06:39:31.668480  624472 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1277/cgroup
	I1228 06:39:31.676787  624472 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1a9574b548b6df76bd80731f84b63a0.slice/cri-containerd-7de9b246785f36668dd2ccf014b3b060e8fbe7558a109c45b031404c807203a6.scope/cgroup.freeze
	I1228 06:39:31.684332  624472 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1228 06:39:31.689157  624472 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1228 06:39:31.689178  624472 status.go:463] ha-331080-m03 apiserver status = Running (err=<nil>)
	I1228 06:39:31.689187  624472 status.go:176] ha-331080-m03 status: &{Name:ha-331080-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:39:31.689209  624472 status.go:174] checking status of ha-331080-m04 ...
	I1228 06:39:31.689487  624472 cli_runner.go:164] Run: docker container inspect ha-331080-m04 --format={{.State.Status}}
	I1228 06:39:31.706150  624472 status.go:371] ha-331080-m04 host status = "Running" (err=<nil>)
	I1228 06:39:31.706171  624472 host.go:66] Checking if "ha-331080-m04" exists ...
	I1228 06:39:31.706469  624472 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-331080-m04
	I1228 06:39:31.723718  624472 host.go:66] Checking if "ha-331080-m04" exists ...
	I1228 06:39:31.723976  624472 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:39:31.724012  624472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-331080-m04
	I1228 06:39:31.740904  624472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/ha-331080-m04/id_rsa Username:docker}
	I1228 06:39:31.829964  624472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:39:31.842592  624472 status.go:176] ha-331080-m04 status: &{Name:ha-331080-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.67s)
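
The stderr trace above shows how the status command assembles each row: host state comes from docker container inspect, kubelet state from systemctl over SSH inside the node, and apiserver state from a /healthz probe against the load-balancer endpoint. A minimal sketch of the same probes run by hand, assuming the profile names and endpoint from this trace:

	# host state, as the docker driver reports it (command taken from the trace)
	docker container inspect ha-331080-m02 --format={{.State.Status}}
	# kubelet state, checked inside a node over SSH
	out/minikube-linux-amd64 -p ha-331080 ssh -n ha-331080 "sudo systemctl is-active --quiet service kubelet" && echo kubelet running
	# apiserver health via an authenticated request, equivalent to the healthz check in the trace
	kubectl get --raw /healthz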

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (8.74s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-331080 node start m02 --alsologtostderr -v 5: (7.774963082s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.74s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.90s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.62s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-331080 stop --alsologtostderr -v 5: (37.261879101s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 start --wait true --alsologtostderr -v 5
E1228 06:40:26.766610  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:40:26.771955  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:40:26.782402  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:40:26.802922  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:40:26.843300  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:40:26.923706  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:40:27.084251  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:40:27.404862  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:40:28.045458  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:40:29.326581  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:40:31.887473  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:40:37.007702  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:40:47.247880  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:40:48.588380  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:41:07.728368  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:41:16.274258  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-331080 start --wait true --alsologtostderr -v 5: (1m2.220884941s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.62s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.37s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-331080 node delete m03 --alsologtostderr -v 5: (8.581809556s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.37s)
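
The go-template in the last check walks every node's condition list and prints the status of the Ready condition, so a fully healthy cluster yields one True line per remaining node. An equivalent jsonpath form (illustrative only; the test itself runs the go-template above):

	kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'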

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.08s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 stop --alsologtostderr -v 5
E1228 06:41:48.688610  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-331080 stop --alsologtostderr -v 5: (35.957808411s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-331080 status --alsologtostderr -v 5: exit status 7 (119.266278ms)

                                                
                                                
-- stdout --
	ha-331080
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-331080-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-331080-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1228 06:42:07.917192  640732 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:42:07.917323  640732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:42:07.917337  640732 out.go:374] Setting ErrFile to fd 2...
	I1228 06:42:07.917343  640732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:42:07.917522  640732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 06:42:07.917693  640732 out.go:368] Setting JSON to false
	I1228 06:42:07.917719  640732 mustload.go:66] Loading cluster: ha-331080
	I1228 06:42:07.917846  640732 notify.go:221] Checking for updates...
	I1228 06:42:07.918072  640732 config.go:182] Loaded profile config "ha-331080": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:42:07.918093  640732 status.go:174] checking status of ha-331080 ...
	I1228 06:42:07.918574  640732 cli_runner.go:164] Run: docker container inspect ha-331080 --format={{.State.Status}}
	I1228 06:42:07.937247  640732 status.go:371] ha-331080 host status = "Stopped" (err=<nil>)
	I1228 06:42:07.937290  640732 status.go:384] host is not running, skipping remaining checks
	I1228 06:42:07.937304  640732 status.go:176] ha-331080 status: &{Name:ha-331080 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:42:07.937344  640732 status.go:174] checking status of ha-331080-m02 ...
	I1228 06:42:07.937781  640732 cli_runner.go:164] Run: docker container inspect ha-331080-m02 --format={{.State.Status}}
	I1228 06:42:07.957273  640732 status.go:371] ha-331080-m02 host status = "Stopped" (err=<nil>)
	I1228 06:42:07.957312  640732 status.go:384] host is not running, skipping remaining checks
	I1228 06:42:07.957323  640732 status.go:176] ha-331080-m02 status: &{Name:ha-331080-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:42:07.957351  640732 status.go:174] checking status of ha-331080-m04 ...
	I1228 06:42:07.957646  640732 cli_runner.go:164] Run: docker container inspect ha-331080-m04 --format={{.State.Status}}
	I1228 06:42:07.975434  640732 status.go:371] ha-331080-m04 host status = "Stopped" (err=<nil>)
	I1228 06:42:07.975456  640732 status.go:384] host is not running, skipping remaining checks
	I1228 06:42:07.975480  640732 status.go:176] ha-331080-m04 status: &{Name:ha-331080-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.08s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (55.56s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-331080 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (54.744174651s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (55.56s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (39.3s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 node add --control-plane --alsologtostderr -v 5
E1228 06:43:10.609768  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-331080 node add --control-plane --alsologtostderr -v 5: (38.414389101s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-331080 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (39.30s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                    
TestJSONOutput/start/Command (39.07s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-228296 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-228296 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (39.069054842s)
--- PASS: TestJSONOutput/start/Command (39.07s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.45s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-228296 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.45s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.43s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-228296 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.43s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.86s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-228296 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-228296 --output=json --user=testUser: (5.861984031s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-271107 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-271107 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (78.98937ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"819ce59d-bad3-4780-a29e-df54fb75d7c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-271107] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a9f86a96-037f-4751-a912-641025a248c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22352"}}
	{"specversion":"1.0","id":"48a587cb-66c9-407e-a78a-0308db71da17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bc039f99-c86c-4013-adae-7f3afccf911c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig"}}
	{"specversion":"1.0","id":"a829d807-565c-4c97-b2a6-f0fe0383ba14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube"}}
	{"specversion":"1.0","id":"81dca220-7501-4f22-9285-829ec2cc1515","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"70b3e7e7-691d-41c8-a5ba-03659d856724","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2d7c7e15-0557-4801-b328-cb9e610b6081","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-271107" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-271107
--- PASS: TestErrorJSONOutput (0.23s)
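
Each line of the JSON output above is a CloudEvents envelope with the interesting fields under data, so it can be filtered line by line. A small post-processing sketch (the jq step is an illustration by this report, not part of the test):

	# surface only error events, e.g. DRV_UNSUPPORTED_OS above
	out/minikube-linux-amd64 start -p json-output-error-271107 --memory=3072 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'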

                                                
                                    
TestKicCustomNetwork/create_custom_network (31.97s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-797811 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-797811 --network=: (29.835460894s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-797811" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-797811
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-797811: (2.109204603s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.97s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (22.27s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-026270 --network=bridge
E1228 06:45:26.766403  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-026270 --network=bridge: (20.262186012s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-026270" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-026270
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-026270: (1.99165243s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.27s)

                                                
                                    
TestKicExistingNetwork (22.9s)

=== RUN   TestKicExistingNetwork
I1228 06:45:37.497674  555878 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1228 06:45:37.514053  555878 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1228 06:45:37.514129  555878 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1228 06:45:37.514145  555878 cli_runner.go:164] Run: docker network inspect existing-network
W1228 06:45:37.530687  555878 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1228 06:45:37.530715  555878 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1228 06:45:37.530731  555878 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1228 06:45:37.530841  555878 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1228 06:45:37.547536  555878 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-31e8897025f8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2a:2c:f5:84:90:06} reservation:<nil>}
I1228 06:45:37.547904  555878 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001abf2d0}
I1228 06:45:37.547937  555878 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1228 06:45:37.547987  555878 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1228 06:45:37.594604  555878 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-630562 --network=existing-network
E1228 06:45:48.588549  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:45:54.455376  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-630562 --network=existing-network: (20.747601283s)
helpers_test.go:176: Cleaning up "existing-network-630562" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-630562
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-630562: (2.022930086s)
I1228 06:46:00.382082  555878 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (22.90s)
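
The trace shows the full pre-provisioning flow: inspect the named network (absent, exit 1), skip the already-taken 192.168.49.0/24 subnet, create existing-network on the free 192.168.58.0/24, and only then start a cluster attached to it. The same two steps by hand, with the create options copied from the network_create log line (minikube labels omitted):

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 existing-network
	out/minikube-linux-amd64 start -p existing-network-630562 --network=existing-network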

                                                
                                    
TestKicCustomSubnet (20.7s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-654569 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-654569 --subnet=192.168.60.0/24: (18.552871477s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-654569 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-654569" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-654569
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-654569: (2.124232887s)
--- PASS: TestKicCustomSubnet (20.70s)

                                                
                                    
TestKicStaticIP (23.17s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-451070 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-451070 --static-ip=192.168.200.200: (20.892023076s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-451070 ip
helpers_test.go:176: Cleaning up "static-ip-451070" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-451070
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-451070: (2.135792181s)
--- PASS: TestKicStaticIP (23.17s)
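
Together with TestKicCustomSubnet above, this covers the two KIC addressing knobs: --subnet pins the docker network CIDR, while --static-ip pins the node address itself. Both are verified from the host with the same commands the tests run:

	out/minikube-linux-amd64 start -p custom-subnet-654569 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-654569 --format "{{(index .IPAM.Config 0).Subnet}}"
	out/minikube-linux-amd64 start -p static-ip-451070 --static-ip=192.168.200.200
	out/minikube-linux-amd64 -p static-ip-451070 ip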

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (42.32s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-261048 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-261048 --driver=docker  --container-runtime=containerd: (16.376355949s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-263821 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-263821 --driver=docker  --container-runtime=containerd: (19.992618995s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-261048
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-263821
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-263821" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-263821
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-263821: (2.32339415s)
helpers_test.go:176: Cleaning up "first-261048" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-261048
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-261048: (2.332260692s)
--- PASS: TestMinikubeProfile (42.32s)
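
profile <name> only switches the active profile that later bare commands act on; the test confirms each switch by parsing profile list -ojson. A quick manual check (the jq filter and its .valid[].Name path are assumptions of this report, not something the test runs):

	out/minikube-linux-amd64 profile first-261048
	out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'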

                                                
                                    
TestMountStart/serial/StartWithMountFirst (4.43s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-174296 --memory=3072 --mount-string /tmp/TestMountStartserial3384010725/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-174296 --memory=3072 --mount-string /tmp/TestMountStartserial3384010725/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.427873748s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.43s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-174296 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (4.41s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-188862 --memory=3072 --mount-string /tmp/TestMountStartserial3384010725/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-188862 --memory=3072 --mount-string /tmp/TestMountStartserial3384010725/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.414355357s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.41s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-188862 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.65s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-174296 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-174296 --alsologtostderr -v=5: (1.649772963s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-188862 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-188862
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-188862: (1.259494017s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.6s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-188862
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-188862: (6.601173417s)
--- PASS: TestMountStart/serial/RestartStopped (7.60s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-188862 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)
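
This serial group walks a host mount through its whole lifecycle: start two mount-enabled instances, verify from inside each node, delete the first, stop and restart the second, and re-verify that the mount survives. Note the two instances use distinct --mount-port values (46464 vs 46465) so their mount servers cannot collide on the host. The verification step is simply:

	out/minikube-linux-amd64 -p mount-start-2-188862 ssh -- ls /minikube-host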

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (68.9s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-933034 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-933034 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m8.401242488s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (68.90s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.23s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-933034 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-933034 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-933034 -- rollout status deployment/busybox: (3.675971146s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-933034 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-933034 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-933034 -- exec busybox-769dd8b7dd-2bfcd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-933034 -- exec busybox-769dd8b7dd-997vj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-933034 -- exec busybox-769dd8b7dd-2bfcd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-933034 -- exec busybox-769dd8b7dd-997vj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-933034 -- exec busybox-769dd8b7dd-2bfcd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-933034 -- exec busybox-769dd8b7dd-997vj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.23s)
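The deployment step above fans a busybox Deployment across both nodes and then resolves in-cluster names from each pod. A hedged sketch of the same flow using plain kubectl (the manifest path is the test's own; pod names such as busybox-769dd8b7dd-2bfcd are generated per run):

	kubectl --context multinode-933034 apply -f testdata/multinodes/multinode-pod-dns-test.yaml
	kubectl --context multinode-933034 rollout status deployment/busybox
	PODS=$(kubectl --context multinode-933034 get pods -o jsonpath='{.items[*].metadata.name}')
	for p in $PODS; do
	  kubectl --context multinode-933034 exec "$p" -- nslookup kubernetes.default.svc.cluster.local
	done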

TestMultiNode/serial/PingHostFrom2Pods (0.82s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-933034 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-933034 -- exec busybox-769dd8b7dd-2bfcd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-933034 -- exec busybox-769dd8b7dd-2bfcd -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-933034 -- exec busybox-769dd8b7dd-997vj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-933034 -- exec busybox-769dd8b7dd-997vj -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)
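The pipeline in the exec commands above is worth unpacking: the test assumes busybox's nslookup prints the resolved address on its fifth output line, so `awk 'NR==5'` keeps that line and `cut -d' ' -f3` takes its third field, the host IP, which is then pinged. A sketch of the same extraction (pod name taken from this run; a different run would generate another):

	HOST_IP=$(kubectl --context multinode-933034 exec busybox-769dd8b7dd-2bfcd -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl --context multinode-933034 exec busybox-769dd8b7dd-2bfcd -- sh -c "ping -c 1 $HOST_IP"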

TestMultiNode/serial/AddNode (28.23s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-933034 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-933034 -v=5 --alsologtostderr: (27.588619392s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.23s)
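Adding a worker after the fact is a single command; a sketch matching the invocation above:

	minikube node add -p multinode-933034 -v=5 --alsologtostderr
	minikube -p multinode-933034 status --alsologtostderr   # the new m03 node should appear as Running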

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-933034 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.65s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

TestMultiNode/serial/CopyFile (9.59s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 cp testdata/cp-test.txt multinode-933034:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 ssh -n multinode-933034 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 cp multinode-933034:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile137560016/001/cp-test_multinode-933034.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 ssh -n multinode-933034 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 cp multinode-933034:/home/docker/cp-test.txt multinode-933034-m02:/home/docker/cp-test_multinode-933034_multinode-933034-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 ssh -n multinode-933034 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 ssh -n multinode-933034-m02 "sudo cat /home/docker/cp-test_multinode-933034_multinode-933034-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 cp multinode-933034:/home/docker/cp-test.txt multinode-933034-m03:/home/docker/cp-test_multinode-933034_multinode-933034-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 ssh -n multinode-933034 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 ssh -n multinode-933034-m03 "sudo cat /home/docker/cp-test_multinode-933034_multinode-933034-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 cp testdata/cp-test.txt multinode-933034-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 ssh -n multinode-933034-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 cp multinode-933034-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile137560016/001/cp-test_multinode-933034-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 ssh -n multinode-933034-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 cp multinode-933034-m02:/home/docker/cp-test.txt multinode-933034:/home/docker/cp-test_multinode-933034-m02_multinode-933034.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 ssh -n multinode-933034-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 ssh -n multinode-933034 "sudo cat /home/docker/cp-test_multinode-933034-m02_multinode-933034.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 cp multinode-933034-m02:/home/docker/cp-test.txt multinode-933034-m03:/home/docker/cp-test_multinode-933034-m02_multinode-933034-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 ssh -n multinode-933034-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 ssh -n multinode-933034-m03 "sudo cat /home/docker/cp-test_multinode-933034-m02_multinode-933034-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 cp testdata/cp-test.txt multinode-933034-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 ssh -n multinode-933034-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 cp multinode-933034-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile137560016/001/cp-test_multinode-933034-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 ssh -n multinode-933034-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 cp multinode-933034-m03:/home/docker/cp-test.txt multinode-933034:/home/docker/cp-test_multinode-933034-m03_multinode-933034.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 ssh -n multinode-933034-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 ssh -n multinode-933034 "sudo cat /home/docker/cp-test_multinode-933034-m03_multinode-933034.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 cp multinode-933034-m03:/home/docker/cp-test.txt multinode-933034-m02:/home/docker/cp-test_multinode-933034-m03_multinode-933034-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 ssh -n multinode-933034-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 ssh -n multinode-933034-m02 "sudo cat /home/docker/cp-test_multinode-933034-m03_multinode-933034-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.59s)
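The copy matrix above exercises all three forms `minikube cp` accepts: host path to node, node to host path, and node to node, with node-side locations written `<node-name>:<path>`. A condensed sketch (destination paths illustrative):

	minikube -p multinode-933034 cp testdata/cp-test.txt multinode-933034:/home/docker/cp-test.txt   # host -> node
	minikube -p multinode-933034 cp multinode-933034:/home/docker/cp-test.txt /tmp/cp-test.txt       # node -> host
	minikube -p multinode-933034 cp multinode-933034:/home/docker/cp-test.txt \
	  multinode-933034-m02:/home/docker/cp-test.txt                                                  # node -> node
	minikube -p multinode-933034 ssh -n multinode-933034-m02 "sudo cat /home/docker/cp-test.txt"     # verify on target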

TestMultiNode/serial/StopNode (2.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-933034 node stop m03: (1.263517854s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-933034 status: exit status 7 (489.589151ms)

-- stdout --
	multinode-933034
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-933034-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-933034-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-933034 status --alsologtostderr: exit status 7 (494.008914ms)

-- stdout --
	multinode-933034
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-933034-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-933034-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1228 06:49:44.177346  702596 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:49:44.177602  702596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:49:44.177611  702596 out.go:374] Setting ErrFile to fd 2...
	I1228 06:49:44.177615  702596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:49:44.177788  702596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 06:49:44.177964  702596 out.go:368] Setting JSON to false
	I1228 06:49:44.177989  702596 mustload.go:66] Loading cluster: multinode-933034
	I1228 06:49:44.178115  702596 notify.go:221] Checking for updates...
	I1228 06:49:44.178368  702596 config.go:182] Loaded profile config "multinode-933034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:49:44.178387  702596 status.go:174] checking status of multinode-933034 ...
	I1228 06:49:44.178798  702596 cli_runner.go:164] Run: docker container inspect multinode-933034 --format={{.State.Status}}
	I1228 06:49:44.198268  702596 status.go:371] multinode-933034 host status = "Running" (err=<nil>)
	I1228 06:49:44.198288  702596 host.go:66] Checking if "multinode-933034" exists ...
	I1228 06:49:44.198538  702596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-933034
	I1228 06:49:44.217391  702596 host.go:66] Checking if "multinode-933034" exists ...
	I1228 06:49:44.217658  702596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:49:44.217697  702596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933034
	I1228 06:49:44.234960  702596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/multinode-933034/id_rsa Username:docker}
	I1228 06:49:44.323000  702596 ssh_runner.go:195] Run: systemctl --version
	I1228 06:49:44.330230  702596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:49:44.342627  702596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:49:44.397941  702596 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-28 06:49:44.387964652 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:49:44.398504  702596 kubeconfig.go:125] found "multinode-933034" server: "https://192.168.67.2:8443"
	I1228 06:49:44.398543  702596 api_server.go:166] Checking apiserver status ...
	I1228 06:49:44.398582  702596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:49:44.410869  702596 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1316/cgroup
	I1228 06:49:44.419196  702596 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1316/cgroup
	I1228 06:49:44.426758  702596 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fd44f71e0fb371881d6058c0d85c718.slice/cri-containerd-62c65cd06b94f12ad64f21236b1b68a577c861354f6a05ade5024f4c0a209d27.scope/cgroup.freeze
	I1228 06:49:44.434262  702596 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1228 06:49:44.438494  702596 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1228 06:49:44.438515  702596 status.go:463] multinode-933034 apiserver status = Running (err=<nil>)
	I1228 06:49:44.438537  702596 status.go:176] multinode-933034 status: &{Name:multinode-933034 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:49:44.438556  702596 status.go:174] checking status of multinode-933034-m02 ...
	I1228 06:49:44.438780  702596 cli_runner.go:164] Run: docker container inspect multinode-933034-m02 --format={{.State.Status}}
	I1228 06:49:44.456529  702596 status.go:371] multinode-933034-m02 host status = "Running" (err=<nil>)
	I1228 06:49:44.456552  702596 host.go:66] Checking if "multinode-933034-m02" exists ...
	I1228 06:49:44.456928  702596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-933034-m02
	I1228 06:49:44.473605  702596 host.go:66] Checking if "multinode-933034-m02" exists ...
	I1228 06:49:44.473879  702596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:49:44.473917  702596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933034-m02
	I1228 06:49:44.491624  702596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22352-552174/.minikube/machines/multinode-933034-m02/id_rsa Username:docker}
	I1228 06:49:44.580608  702596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:49:44.593045  702596 status.go:176] multinode-933034-m02 status: &{Name:multinode-933034-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:49:44.593075  702596 status.go:174] checking status of multinode-933034-m03 ...
	I1228 06:49:44.593353  702596 cli_runner.go:164] Run: docker container inspect multinode-933034-m03 --format={{.State.Status}}
	I1228 06:49:44.610441  702596 status.go:371] multinode-933034-m03 host status = "Stopped" (err=<nil>)
	I1228 06:49:44.610464  702596 status.go:384] host is not running, skipping remaining checks
	I1228 06:49:44.610470  702596 status.go:176] multinode-933034-m03 status: &{Name:multinode-933034-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
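Note the exit-code semantics the test relies on: once one node is stopped, `minikube status` exits with code 7 even though the command itself ran fine, so scripted checks must treat 7 as "degraded" rather than a hard failure. A sketch:

	minikube -p multinode-933034 node stop m03
	minikube -p multinode-933034 status; rc=$?
	[ "$rc" -eq 7 ] && echo "at least one host is Stopped"   # exit 7 = a host is down, as in the run above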

TestMultiNode/serial/StartAfterStop (6.8s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-933034 node start m03 -v=5 --alsologtostderr: (6.102713246s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.80s)

TestMultiNode/serial/RestartKeepsNodes (70.72s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-933034
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-933034
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-933034: (24.910422407s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-933034 --wait=true -v=5 --alsologtostderr
E1228 06:50:26.767277  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:50:48.588129  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-933034 --wait=true -v=5 --alsologtostderr: (45.68075789s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-933034
--- PASS: TestMultiNode/serial/RestartKeepsNodes (70.72s)
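A sketch of the stop/restart round-trip above, which asserts the node list survives a full profile stop (binary shortened to `minikube`):

	minikube node list -p multinode-933034                   # record the nodes
	minikube stop -p multinode-933034                        # stops every node in the profile
	minikube start -p multinode-933034 --wait=true -v=5 --alsologtostderr
	minikube node list -p multinode-933034                   # should match the earlier list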

TestMultiNode/serial/DeleteNode (5.29s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-933034 node delete m03: (4.678655163s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.29s)

TestMultiNode/serial/StopMultiNode (23.89s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-933034 stop: (23.703327038s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-933034 status: exit status 7 (95.96066ms)

-- stdout --
	multinode-933034
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-933034-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-933034 status --alsologtostderr: exit status 7 (94.630315ms)

-- stdout --
	multinode-933034
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-933034-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1228 06:51:31.271871  712240 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:51:31.272112  712240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:51:31.272120  712240 out.go:374] Setting ErrFile to fd 2...
	I1228 06:51:31.272124  712240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:51:31.272353  712240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 06:51:31.272535  712240 out.go:368] Setting JSON to false
	I1228 06:51:31.272559  712240 mustload.go:66] Loading cluster: multinode-933034
	I1228 06:51:31.272613  712240 notify.go:221] Checking for updates...
	I1228 06:51:31.272895  712240 config.go:182] Loaded profile config "multinode-933034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:51:31.272909  712240 status.go:174] checking status of multinode-933034 ...
	I1228 06:51:31.273348  712240 cli_runner.go:164] Run: docker container inspect multinode-933034 --format={{.State.Status}}
	I1228 06:51:31.291443  712240 status.go:371] multinode-933034 host status = "Stopped" (err=<nil>)
	I1228 06:51:31.291490  712240 status.go:384] host is not running, skipping remaining checks
	I1228 06:51:31.291500  712240 status.go:176] multinode-933034 status: &{Name:multinode-933034 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:51:31.291557  712240 status.go:174] checking status of multinode-933034-m02 ...
	I1228 06:51:31.291871  712240 cli_runner.go:164] Run: docker container inspect multinode-933034-m02 --format={{.State.Status}}
	I1228 06:51:31.307697  712240 status.go:371] multinode-933034-m02 host status = "Stopped" (err=<nil>)
	I1228 06:51:31.307721  712240 status.go:384] host is not running, skipping remaining checks
	I1228 06:51:31.307730  712240 status.go:176] multinode-933034-m02 status: &{Name:multinode-933034-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.89s)

TestMultiNode/serial/RestartMultiNode (52.69s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-933034 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1228 06:52:11.635433  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-933034 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (52.082551597s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-933034 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.69s)

TestMultiNode/serial/ValidateNameConflict (20.31s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-933034
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-933034-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-933034-m02 --driver=docker  --container-runtime=containerd: exit status 14 (76.522763ms)

-- stdout --
	* [multinode-933034-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-933034-m02' is duplicated with machine name 'multinode-933034-m02' in profile 'multinode-933034'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-933034-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-933034-m03 --driver=docker  --container-runtime=containerd: (17.547573042s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-933034
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-933034: exit status 80 (304.72473ms)

-- stdout --
	* Adding node m03 to cluster multinode-933034 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-933034-m03 already exists in multinode-933034-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-933034-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-933034-m03: (2.31904108s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (20.31s)
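Two distinct guards fire above: profile creation rejects a name that collides with a machine name inside another profile (exit 14, MK_USAGE), and `node add` refuses a node whose generated name already exists as a profile of its own (exit 80, GUEST_NODE_ADD). A sketch of the first guard:

	minikube start -p multinode-933034-m02 --driver=docker --container-runtime=containerd
	echo $?   # 14 while profile multinode-933034 still owns machine multinode-933034-m02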

TestScheduledStopUnix (91.96s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-485831 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-485831 --memory=3072 --driver=docker  --container-runtime=containerd: (15.833246318s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-485831 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1228 06:53:04.403172  723050 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:53:04.403660  723050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:53:04.403672  723050 out.go:374] Setting ErrFile to fd 2...
	I1228 06:53:04.403678  723050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:53:04.403858  723050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 06:53:04.404112  723050 out.go:368] Setting JSON to false
	I1228 06:53:04.404242  723050 mustload.go:66] Loading cluster: scheduled-stop-485831
	I1228 06:53:04.404556  723050 config.go:182] Loaded profile config "scheduled-stop-485831": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:53:04.404636  723050 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/scheduled-stop-485831/config.json ...
	I1228 06:53:04.404835  723050 mustload.go:66] Loading cluster: scheduled-stop-485831
	I1228 06:53:04.404965  723050 config.go:182] Loaded profile config "scheduled-stop-485831": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-485831 -n scheduled-stop-485831
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-485831 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1228 06:53:04.806144  723207 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:53:04.806281  723207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:53:04.806293  723207 out.go:374] Setting ErrFile to fd 2...
	I1228 06:53:04.806300  723207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:53:04.806534  723207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 06:53:04.806798  723207 out.go:368] Setting JSON to false
	I1228 06:53:04.807026  723207 daemonize_unix.go:73] killing process 723086 as it is an old scheduled stop
	I1228 06:53:04.807144  723207 mustload.go:66] Loading cluster: scheduled-stop-485831
	I1228 06:53:04.807539  723207 config.go:182] Loaded profile config "scheduled-stop-485831": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:53:04.807639  723207 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/scheduled-stop-485831/config.json ...
	I1228 06:53:04.807849  723207 mustload.go:66] Loading cluster: scheduled-stop-485831
	I1228 06:53:04.807991  723207 config.go:182] Loaded profile config "scheduled-stop-485831": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1228 06:53:04.813381  555878 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/scheduled-stop-485831/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-485831 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-485831 -n scheduled-stop-485831
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-485831
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-485831 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1228 06:53:30.728914  724106 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:53:30.730036  724106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:53:30.730048  724106 out.go:374] Setting ErrFile to fd 2...
	I1228 06:53:30.730052  724106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:53:30.730324  724106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 06:53:30.730607  724106 out.go:368] Setting JSON to false
	I1228 06:53:30.730705  724106 mustload.go:66] Loading cluster: scheduled-stop-485831
	I1228 06:53:30.731090  724106 config.go:182] Loaded profile config "scheduled-stop-485831": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:53:30.731185  724106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/scheduled-stop-485831/config.json ...
	I1228 06:53:30.731403  724106 mustload.go:66] Loading cluster: scheduled-stop-485831
	I1228 06:53:30.731519  724106 config.go:182] Loaded profile config "scheduled-stop-485831": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-485831
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-485831: exit status 7 (82.140846ms)

-- stdout --
	scheduled-stop-485831
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-485831 -n scheduled-stop-485831
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-485831 -n scheduled-stop-485831: exit status 7 (80.045198ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-485831" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-485831
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-485831: (4.543866835s)
--- PASS: TestScheduledStopUnix (91.96s)
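A sketch of the scheduled-stop lifecycle the test walks through: arm, re-arm (which kills the previously scheduled stop process), cancel, then arm again and let it fire (the wait duration below is illustrative; the test polls rather than sleeping a fixed time):

	minikube stop -p scheduled-stop-485831 --schedule 5m      # arm a stop 5 minutes out
	minikube stop -p scheduled-stop-485831 --schedule 15s     # re-arm; the old scheduled process is killed
	minikube stop -p scheduled-stop-485831 --cancel-scheduled # "All existing scheduled stops cancelled"
	minikube stop -p scheduled-stop-485831 --schedule 15s
	sleep 20 && minikube status -p scheduled-stop-485831      # exit 7, host: Stopped once the stop fires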

TestInsufficientStorage (11.43s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-790065 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-790065 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.011658092s)

-- stdout --
	{"specversion":"1.0","id":"83c1da44-fff4-4d19-ad8a-c666422b0e60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-790065] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"81d436c3-132e-4789-a415-a48beefe5734","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22352"}}
	{"specversion":"1.0","id":"538d708e-9285-467b-be17-5326101e368b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"67bafdaf-a27c-4e70-af83-92ad6efb6388","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig"}}
	{"specversion":"1.0","id":"e76a7696-7410-40bc-b7bd-d7789d9a3a58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube"}}
	{"specversion":"1.0","id":"ae251b68-13f2-45a4-b053-c3c61f3354f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1e38143d-4e15-4ba4-90af-f88cce233a3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"46a5dd7c-a9d9-45b5-af45-34c7b8804cc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3a93e1b6-f9a6-44cd-a4f1-3e1ded64fea2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"fc0abf99-db5a-4dc7-8dd6-1e233351443b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5ec2ff61-86a0-45a6-b606-0743097645ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"568bc323-c089-410c-92f8-e65caec3e5a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-790065\" primary control-plane node in \"insufficient-storage-790065\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e76973ea-2810-4e32-8a7f-d106876bd88f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766884053-22351 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"cc098df1-8d36-413d-9049-31eb007b8100","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e07316cf-ad0f-4441-a151-657f2e1e7ce8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-790065 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-790065 --output=json --layout=cluster: exit status 7 (287.085938ms)

-- stdout --
	{"Name":"insufficient-storage-790065","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-790065","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1228 06:54:29.739777  726352 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-790065" does not appear in /home/jenkins/minikube-integration/22352-552174/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-790065 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-790065 --output=json --layout=cluster: exit status 7 (275.437084ms)

-- stdout --
	{"Name":"insufficient-storage-790065","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-790065","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1228 06:54:30.015799  726466 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-790065" does not appear in /home/jenkins/minikube-integration/22352-552174/kubeconfig
	E1228 06:54:30.026539  726466 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/insufficient-storage-790065/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-790065" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-790065
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-790065: (1.853115336s)
--- PASS: TestInsufficientStorage (11.43s)
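The storage check is driven by the two test-only environment variables visible in the JSON events above; with them set, start bails out with exit code 26 (RSRC_DOCKER_STORAGE) before creating the container, and status reports the HTTP-style code 507. A hedged sketch (the exact semantics of the override values are internal to the test suite):

	export MINIKUBE_TEST_STORAGE_CAPACITY=100   # simulate /var at 100% of capacity
	export MINIKUBE_TEST_AVAILABLE_STORAGE=19
	minikube start -p insufficient-storage-790065 --memory=3072 --output=json --wait=true \
	  --driver=docker --container-runtime=containerd   # expect exit 26
	minikube status -p insufficient-storage-790065 --output=json --layout=cluster   # StatusCode 507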

TestRunningBinaryUpgrade (70.01s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3333165371 start -p running-upgrade-397849 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3333165371 start -p running-upgrade-397849 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (55.443188199s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-397849 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-397849 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.638668487s)
helpers_test.go:176: Cleaning up "running-upgrade-397849" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-397849
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-397849: (2.833044856s)
--- PASS: TestRunningBinaryUpgrade (70.01s)
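The upgrade path above is: create the cluster with an archived v1.35.0 release binary, then restart the same profile with the freshly built binary. A sketch (the /tmp binary name carries a random suffix per run):

	/tmp/minikube-v1.35.0.3333165371 start -p running-upgrade-397849 --memory=3072 \
	  --vm-driver=docker --container-runtime=containerd   # old binary, old-style --vm-driver flag
	out/minikube-linux-amd64 start -p running-upgrade-397849 --memory=3072 \
	  --driver=docker --container-runtime=containerd      # new binary takes over the running cluster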

TestKubernetesUpgrade (312.97s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-926675 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-926675 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (24.436814794s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-926675 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-926675 --alsologtostderr: (1.317112238s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-926675 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-926675 status --format={{.Host}}: exit status 7 (93.798ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-926675 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-926675 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m29.681196633s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-926675 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-926675 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-926675 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (86.203358ms)

-- stdout --
	* [kubernetes-upgrade-926675] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-926675
	    minikube start -p kubernetes-upgrade-926675 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9266752 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-926675 --kubernetes-version=v1.35.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-926675 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-926675 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (13.323998787s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-926675" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-926675
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-926675: (3.948493281s)
--- PASS: TestKubernetesUpgrade (312.97s)
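The sequence above pins Kubernetes versions explicitly: start at v1.28.0, stop, upgrade in place to v1.35.0, then confirm a downgrade attempt is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) while a restart at the current version still works. Condensed:

	minikube start -p kubernetes-upgrade-926675 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
	minikube stop -p kubernetes-upgrade-926675
	minikube start -p kubernetes-upgrade-926675 --memory=3072 --kubernetes-version=v1.35.0 --driver=docker --container-runtime=containerd   # upgrade OK
	minikube start -p kubernetes-upgrade-926675 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd   # refused, exit 106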

TestMissingContainerUpgrade (80.48s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
E1228 06:55:48.588377  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/addons-704221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.29098649 start -p missing-upgrade-317261 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.29098649 start -p missing-upgrade-317261 --memory=3072 --driver=docker  --container-runtime=containerd: (24.272417889s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-317261
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-317261: (2.633347675s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-317261
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-317261 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-317261 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (47.855324679s)
helpers_test.go:176: Cleaning up "missing-upgrade-317261" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-317261
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-317261: (1.90265805s)
--- PASS: TestMissingContainerUpgrade (80.48s)

TestNetworkPlugins/group/false (5.39s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-258759 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-258759 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (1.298202463s)

-- stdout --
	* [false-258759] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1228 06:54:38.014132  728585 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:54:38.014442  728585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:54:38.014455  728585 out.go:374] Setting ErrFile to fd 2...
	I1228 06:54:38.014459  728585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:54:38.014641  728585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-552174/.minikube/bin
	I1228 06:54:38.015127  728585 out.go:368] Setting JSON to false
	I1228 06:54:38.016160  728585 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13022,"bootTime":1766891856,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:54:38.016236  728585 start.go:143] virtualization: kvm guest
	I1228 06:54:38.118504  728585 out.go:179] * [false-258759] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:54:38.219784  728585 notify.go:221] Checking for updates...
	I1228 06:54:38.219793  728585 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:54:38.286819  728585 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:54:38.302155  728585 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	I1228 06:54:38.306139  728585 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	I1228 06:54:38.373240  728585 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:54:38.458739  728585 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:54:38.589130  728585 config.go:182] Loaded profile config "force-systemd-flag-438914": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:54:38.589322  728585 config.go:182] Loaded profile config "offline-containerd-377947": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:54:38.589463  728585 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:54:38.615167  728585 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:54:38.615330  728585 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:54:38.672552  728585 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:56 SystemTime:2025-12-28 06:54:38.66238677 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:54:38.672741  728585 docker.go:319] overlay module found
	I1228 06:54:38.825496  728585 out.go:179] * Using the docker driver based on user configuration
	I1228 06:54:38.957989  728585 start.go:309] selected driver: docker
	I1228 06:54:38.958015  728585 start.go:928] validating driver "docker" against <nil>
	I1228 06:54:38.958030  728585 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:54:39.021253  728585 out.go:203] 
	W1228 06:54:39.104561  728585 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1228 06:54:39.187879  728585 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-258759 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-258759

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-258759

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-258759

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-258759

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-258759

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-258759

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-258759

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-258759

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-258759

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-258759

>>> host: /etc/nsswitch.conf:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: /etc/hosts:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: /etc/resolv.conf:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-258759

>>> host: crictl pods:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: crictl containers:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> k8s: describe netcat deployment:
error: context "false-258759" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-258759" does not exist

>>> k8s: netcat logs:
error: context "false-258759" does not exist

>>> k8s: describe coredns deployment:
error: context "false-258759" does not exist

>>> k8s: describe coredns pods:
error: context "false-258759" does not exist

>>> k8s: coredns logs:
error: context "false-258759" does not exist

>>> k8s: describe api server pod(s):
error: context "false-258759" does not exist

>>> k8s: api server logs:
error: context "false-258759" does not exist

>>> host: /etc/cni:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: ip a s:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: ip r s:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: iptables-save:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: iptables table nat:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> k8s: describe kube-proxy daemon set:
error: context "false-258759" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-258759" does not exist

>>> k8s: kube-proxy logs:
error: context "false-258759" does not exist

>>> host: kubelet daemon status:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: kubelet daemon config:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> k8s: kubelet logs:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-258759

>>> host: docker daemon status:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: docker daemon config:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: /etc/docker/daemon.json:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: docker system info:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: cri-docker daemon status:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: cri-docker daemon config:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: cri-dockerd version:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: containerd daemon status:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: containerd daemon config:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: /etc/containerd/config.toml:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: containerd config dump:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: crio daemon status:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: crio daemon config:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: /etc/crio:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

>>> host: crio config:
* Profile "false-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258759"

----------------------- debugLogs end: false-258759 [took: 3.904541418s] --------------------------------
helpers_test.go:176: Cleaning up "false-258759" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-258759
--- PASS: TestNetworkPlugins/group/false (5.39s)
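
Note: the failure above comes from start-time flag validation: with --container-runtime=containerd, --cni=false is rejected as a usage error (MK_USAGE, exit status 14) before any cluster work begins. An illustrative sketch of such a check; this is not minikube's actual code:

    package main

    import (
        "fmt"
        "os"
    )

    // validateCNI mirrors the error text above: the containerd runtime
    // cannot run pods without a CNI, so "--cni=false" is a usage error.
    func validateCNI(runtime, cni string) error {
        if runtime == "containerd" && cni == "false" {
            return fmt.Errorf("The %q container runtime requires CNI", runtime)
        }
        return nil
    }

    func main() {
        if err := validateCNI("containerd", "false"); err != nil {
            fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
            os.Exit(14) // MK_USAGE, the status the test expects
        }
    }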

TestPreload/Start-NoPreload-PullImage (62.72s)

=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-517921 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-517921 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (54.179091045s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-517921 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:56: (dbg) Done: out/minikube-linux-amd64 -p test-preload-517921 image pull ghcr.io/medyagh/image-mirrors/busybox:latest: (1.718723585s)
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-517921
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-517921: (6.823360945s)
--- PASS: TestPreload/Start-NoPreload-PullImage (62.72s)

TestPreload/Restart-With-Preload-Check-User-Image (52.2s)

=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-517921 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1228 06:56:49.815854  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-517921 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (51.967336471s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-517921 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (52.20s)

TestPause/serial/Start (38.99s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-327044 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-327044 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (38.993053738s)
--- PASS: TestPause/serial/Start (38.99s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-875069 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-875069 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (78.792826ms)

-- stdout --
	* [NoKubernetes-875069] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-552174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-552174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
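
Note: exit status 14 is again MK_USAGE, produced by flag validation: --kubernetes-version has no meaning together with --no-kubernetes. A hedged sketch of the shape of that check (the real flag plumbing lives in minikube's cmd package):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Assumed parsed flag values, matching the failing invocation above.
        noKubernetes := true
        kubernetesVersion := "v1.28.0"

        if noKubernetes && kubernetesVersion != "" {
            fmt.Fprintln(os.Stderr,
                "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
            os.Exit(14)
        }
    }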

TestNoKubernetes/serial/StartWithK8s (21.33s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-875069 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-875069 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (20.954227127s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-875069 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (21.33s)

TestPause/serial/SecondStartNoReconfiguration (5.77s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-327044 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-327044 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.756470978s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.77s)

TestNoKubernetes/serial/StartWithStopK8s (5.01s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-875069 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-875069 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (2.629996842s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-875069 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-875069 status -o json: exit status 2 (369.272408ms)

-- stdout --
	{"Name":"NoKubernetes-875069","Host":"Running","Kubelet":"Stopped","APIServer":"Running","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-875069
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-875069: (2.009012932s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (5.01s)
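
Note: the status command exits 2 here (the kubelet is stopped) while still printing a complete JSON body, so the test inspects the fields rather than the exit code. A self-contained sketch of decoding that payload; the struct is inferred from the fields visible above, not taken from minikube's source:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Status lists the fields visible in the output above.
    type Status struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
        Worker     bool
    }

    func main() {
        // Output captured in the log above.
        raw := `{"Name":"NoKubernetes-875069","Host":"Running","Kubelet":"Stopped","APIServer":"Running","Kubeconfig":"Configured","Worker":false}`

        var st Status
        if err := json.Unmarshal([]byte(raw), &st); err != nil {
            panic(err)
        }
        fmt.Printf("kubelet stopped as expected: %v\n", st.Kubelet == "Stopped")
    }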

TestPause/serial/Pause (0.5s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-327044 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.50s)

TestNoKubernetes/serial/Start (3.76s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-875069 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-875069 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (3.764082411s)
--- PASS: TestNoKubernetes/serial/Start (3.76s)

TestStoppedBinaryUpgrade/Setup (3.75s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.75s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22352-552174/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
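
Note: this check verifies that a --no-kubernetes cluster downloaded no Kubernetes artifacts by inspecting the cache path logged above (v0.0.0 appears to be the placeholder version when Kubernetes is disabled). A rough equivalent, with the directory layout assumed from that log line:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Path taken from the log line above; adjust for your MINIKUBE_HOME.
        cacheDir := "/home/jenkins/minikube-integration/22352-552174/.minikube/cache/linux/amd64/v0.0.0"

        entries, err := os.ReadDir(cacheDir)
        if os.IsNotExist(err) {
            fmt.Println("no v0.0.0 cache directory; nothing was downloaded")
            return
        } else if err != nil {
            panic(err)
        }
        for _, e := range entries {
            fmt.Println("unexpected cached file:", e.Name())
        }
    }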

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-875069 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-875069 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.01675ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
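
Note: "systemctl is-active" exits 0 only when the queried unit is active, so the test treats the non-zero status (3 here, surfacing through ssh) as proof that the kubelet is not running. A local sketch of the same probe, without the ssh hop:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors the in-VM command from the log above.
        cmd := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet")
        err := cmd.Run()

        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("kubelet is active; the test would fail here")
        case errors.As(err, &exitErr):
            fmt.Printf("kubelet not active (exit %d); what the test expects\n", exitErr.ExitCode())
        default:
            fmt.Println("could not run systemctl:", err)
        }
    }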

TestNoKubernetes/serial/ProfileList (1.4s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.40s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-875069
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-875069: (1.286756115s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestStoppedBinaryUpgrade/Upgrade (285.99s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1847006401 start -p stopped-upgrade-153407 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1847006401 start -p stopped-upgrade-153407 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (19.978114977s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1847006401 -p stopped-upgrade-153407 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1847006401 -p stopped-upgrade-153407 stop: (1.255964282s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-153407 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-153407 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m24.755489308s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (285.99s)
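
Note: the upgrade flow above is three invocations against one profile: a released v1.35.0 binary (prepared under /tmp during Setup) starts and stops the cluster, then the freshly built binary restarts it. Sketched with os/exec, with both binary paths taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(bin string, args ...string) {
        out, err := exec.Command(bin, args...).CombinedOutput()
        fmt.Printf("%s %v -> err=%v\n%s", bin, args, err, out)
    }

    func main() {
        const profile = "stopped-upgrade-153407"
        released := "/tmp/minikube-v1.35.0.1847006401" // old binary from Setup
        built := "out/minikube-linux-amd64"            // binary under test

        run(released, "start", "-p", profile, "--memory=3072",
            "--vm-driver=docker", "--container-runtime=containerd")
        run(released, "-p", profile, "stop")
        run(built, "start", "-p", profile, "--memory=3072",
            "--driver=docker", "--container-runtime=containerd")
    }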

TestNoKubernetes/serial/StartNoArgs (6.51s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-875069 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-875069 --driver=docker  --container-runtime=containerd: (6.51121763s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.51s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-875069 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-875069 "sudo systemctl is-active --quiet service kubelet": exit status 1 (287.194586ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestNetworkPlugins/group/auto/Start (43.74s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-258759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-258759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (43.738540152s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.74s)

TestNetworkPlugins/group/kindnet/Start (41.5s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-258759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-258759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (41.502919673s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.50s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-258759 "pgrep -a kubelet"
I1228 06:58:54.982405  555878 config.go:182] Loaded profile config "auto-258759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (8.2s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-258759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-m7gpx" [92030e87-82c0-4e49-9e2b-0cf72534ed91] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-m7gpx" [92030e87-82c0-4e49-9e2b-0cf72534ed91] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.003560042s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.20s)
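
Note: the NetCatPod steps deploy testdata/netcat-deployment.yaml and then poll pods labelled app=netcat until they are Running. Outside the suite, roughly the same wait can be expressed with plain kubectl; this sketch shells out rather than using the suite's own polling helper:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Context name and label taken from the log above.
        cmd := exec.Command("kubectl", "--context", "auto-258759",
            "wait", "--for=condition=Ready", "pod",
            "-l", "app=netcat", "--timeout=15m")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("pods never became ready:", err)
        }
    }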

TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-258759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-258759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-258759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
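
Note: Localhost and HairPin both run "nc -w 5 -i 5 -z <target> 8080" inside the netcat pod: -z connects without sending data and -w bounds the wait. The hairpin case dials the pod's own service name ("netcat"), which exercises hairpin NAT in the CNI. The same probe expressed in Go, assuming a reachable listener:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probe mimics `nc -w 5 -z host port`: connect, close, send nothing.
    func probe(addr string) bool {
        conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

    func main() {
        // "localhost:8080" mirrors the Localhost check; inside the pod,
        // "netcat:8080" (the service name) would mirror the HairPin check.
        fmt.Println("listener reachable:", probe("localhost:8080"))
    }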

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-nkcc9" [dcbf50e3-65d9-4af1-b35e-ae7eafd93cad] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003576133s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-258759 "pgrep -a kubelet"
I1228 06:59:13.427309  555878 config.go:182] Loaded profile config "kindnet-258759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-258759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-bn55n" [7263ea31-1732-4ae4-8714-f95722b22ef3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-bn55n" [7263ea31-1732-4ae4-8714-f95722b22ef3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004345989s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.20s)

TestNetworkPlugins/group/calico/Start (48.84s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-258759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-258759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (48.840709183s)
--- PASS: TestNetworkPlugins/group/calico/Start (48.84s)

TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-258759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-258759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-258759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/custom-flannel/Start (51.56s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-258759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-258759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (51.561835686s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.56s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-q4dll" [c7e7522b-7570-4fe4-b693-082709528ce8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004594306s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-258759 "pgrep -a kubelet"
I1228 07:00:17.583989  555878 config.go:182] Loaded profile config "calico-258759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

TestNetworkPlugins/group/calico/NetCatPod (9.33s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-258759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-r9pmd" [49f9d9f1-6342-46c2-83da-6c259e88af6b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-r9pmd" [49f9d9f1-6342-46c2-83da-6c259e88af6b] Running
E1228 07:00:26.767007  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/functional-933591/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004230644s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.33s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-258759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-258759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-258759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)
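
Note: DNS, Localhost, and HairPin form the small connectivity matrix that each CNI group below repeats: service-name resolution through cluster DNS, a pod dialing itself over localhost, and a pod reaching itself back through its own Service (hairpin traffic). The three probes exactly as the tests issue them:

    # cluster DNS: resolve the built-in kubernetes Service
    kubectl --context calico-258759 exec deployment/netcat -- nslookup kubernetes.default
    # localhost: the pod connects to its own port directly
    kubectl --context calico-258759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin: the pod connects to itself via the netcat Service name
    kubectl --context calico-258759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"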

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-258759 "pgrep -a kubelet"
I1228 07:00:34.516288  555878 config.go:182] Loaded profile config "custom-flannel-258759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-258759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-gp5hp" [4608adf9-5dc0-4880-b22a-c4be8299f2df] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-gp5hp" [4608adf9-5dc0-4880-b22a-c4be8299f2df] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004716677s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-258759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-258759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-258759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (62.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-258759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-258759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m2.258677563s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (62.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (48.62s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-258759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-258759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (48.622596229s)
--- PASS: TestNetworkPlugins/group/flannel/Start (48.62s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (65.8s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-258759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-258759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m5.795587591s)
--- PASS: TestNetworkPlugins/group/bridge/Start (65.80s)
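
Note: the three Start runs above differ only in how the CNI is selected: --enable-default-cni=true uses minikube's built-in bridge configuration, while --cni=flannel and --cni=bridge pick named CNI options. Reduced to the distinguishing flags (memory, wait, and logging flags from the full invocations omitted):

    out/minikube-linux-amd64 start -p enable-default-cni-258759 --enable-default-cni=true --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 start -p flannel-258759 --cni=flannel --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 start -p bridge-258759 --cni=bridge --driver=docker --container-runtime=containerd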

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-kkbgg" [57c7e1d6-968f-4e85-a44d-21fbe95f9069] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004460818s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-258759 "pgrep -a kubelet"
I1228 07:01:49.790611  555878 config.go:182] Loaded profile config "flannel-258759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-258759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-kz5ww" [2e23ed6e-d9b2-41dc-86d1-fab8f1ebb1f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-kz5ww" [2e23ed6e-d9b2-41dc-86d1-fab8f1ebb1f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003447249s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-258759 "pgrep -a kubelet"
I1228 07:01:51.644479  555878 config.go:182] Loaded profile config "enable-default-cni-258759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-258759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-sjjtw" [8f8f816e-9f47-4a00-abd5-1da6f4974741] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-sjjtw" [8f8f816e-9f47-4a00-abd5-1da6f4974741] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004059519s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-258759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-258759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-258759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-258759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-258759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-258759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-258759 "pgrep -a kubelet"
I1228 07:02:11.510545  555878 config.go:182] Loaded profile config "bridge-258759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.2s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-258759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-f7blp" [3b301eca-9c9e-45e7-b1ff-bfec161aa3f9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-f7blp" [3b301eca-9c9e-45e7-b1ff-bfec161aa3f9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004461456s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-258759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-258759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-258759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (51.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-805353 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-805353 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (51.056579866s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (51.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (49.01s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-456925 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-456925 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (49.007055544s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (49.01s)
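
Note: --preload=false makes minikube skip its preloaded image/binary tarball, so the start path that pulls each image individually is what this group exercises. The distinguishing part of the invocation:

    # no preload tarball; images are fetched one by one at start
    out/minikube-linux-amd64 start -p no-preload-456925 --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0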

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (42.69s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-982151 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-982151 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (42.693681611s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (42.69s)
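
Note: --embed-certs writes certificate and key data inline into the generated kubeconfig instead of referencing files on disk; the rest of the group verifies that a cluster started this way behaves normally:

    out/minikube-linux-amd64 start -p embed-certs-982151 --embed-certs --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0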

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-153407
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-129908 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-129908 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (42.588780472s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.59s)
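
Note: --apiserver-port=8444 moves the API server off minikube's default 8443; the group then confirms the cluster is fully usable on the non-standard port:

    out/minikube-linux-amd64 start -p default-k8s-diff-port-129908 --apiserver-port=8444 --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0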

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-456925 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [7af0b52c-4f85-4102-b0aa-e37b3ae63c48] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [7af0b52c-4f85-4102-b0aa-e37b3ae63c48] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004011652s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-456925 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.27s)
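
Note: each DeployApp step creates a busybox pod from testdata/busybox.yaml, waits for it to run, then execs a trivial command in it; `ulimit -n` doubles as a check that exec works and prints the container's open-file limit:

    kubectl --context no-preload-456925 create -f testdata/busybox.yaml
    # once Running, print the container's max open files via exec
    kubectl --context no-preload-456925 exec busybox -- /bin/sh -c "ulimit -n"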

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-805353 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [d9d29e50-d42e-4587-a782-865a82530db0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [d9d29e50-d42e-4587-a782-865a82530db0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004147702s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-805353 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-456925 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-456925 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)
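
Note: the addon steps deliberately override the metrics-server image location: --images and --registries repoint the MetricsServer component at a placeholder registry (fake.domain), so it is the override plumbing, not a working metrics-server, that gets verified:

    out/minikube-linux-amd64 addons enable metrics-server -p no-preload-456925 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    # confirm the deployment spec picked up the overridden image
    kubectl --context no-preload-456925 describe deploy/metrics-server -n kube-system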

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.1s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-456925 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-456925 --alsologtostderr -v=3: (12.096883103s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-805353 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-805353 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-805353 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-805353 --alsologtostderr -v=3: (12.03237987s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.23s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-982151 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0773e0ca-0392-4264-b667-1d40aa0ad313] Pending
helpers_test.go:353: "busybox" [0773e0ca-0392-4264-b667-1d40aa0ad313] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0773e0ca-0392-4264-b667-1d40aa0ad313] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003176535s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-982151 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-456925 -n no-preload-456925
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-456925 -n no-preload-456925: exit status 7 (81.917364ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-456925 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)
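
Note: EnableAddonAfterStop first confirms the host is down, then enables an addon against the stopped profile. A non-zero status exit is expected here; in this run `status` printed "Stopped" and exited 7, which the test explicitly treats as acceptable ("may be ok"):

    out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-456925 -n no-preload-456925
    # exits 7 with "Stopped" while the cluster is down; the addon is enabled anyway
    out/minikube-linux-amd64 addons enable dashboard -p no-preload-456925 --images=MetricsScraper=registry.k8s.io/echoserver:1.4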

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (50.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-456925 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-456925 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (49.907091116s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-456925 -n no-preload-456925
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (50.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-805353 -n old-k8s-version-805353
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-805353 -n old-k8s-version-805353: exit status 7 (93.488903ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-805353 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (50.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-805353 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-805353 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (50.190606133s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-805353 -n old-k8s-version-805353
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.54s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-129908 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [063a7b47-e672-4e01-a6c9-e464988fdae9] Pending
helpers_test.go:353: "busybox" [063a7b47-e672-4e01-a6c9-e464988fdae9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [063a7b47-e672-4e01-a6c9-e464988fdae9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004147919s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-129908 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.77s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-982151 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-982151 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.77s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.07s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-982151 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-982151 --alsologtostderr -v=3: (12.071196998s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-129908 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-129908 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-129908 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-129908 --alsologtostderr -v=3: (12.5073961s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.51s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-982151 -n embed-certs-982151
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-982151 -n embed-certs-982151: exit status 7 (91.492203ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-982151 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (59.99s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-982151 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E1228 07:03:55.171805 to 07:03:55.330039  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/auto-258759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key" (repeated 6 times)
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-982151 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (59.62596597s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-982151 -n embed-certs-982151
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (59.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-129908 -n default-k8s-diff-port-129908
E1228 07:03:55.490339  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/auto-258759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-129908 -n default-k8s-diff-port-129908: exit status 7 (105.276237ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-129908 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-129908 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E1228 07:03:55.811349 to 07:04:15.655376  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/auto-258759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key" (repeated 6 times)
E1228 07:04:07.117471 to 07:04:17.358521  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/kindnet-258759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key" (repeated 12 times)
(Note: these errors reference the auto-258759 and kindnet-258759 profiles, which earlier network-plugin tests had already deleted; a certificate reloader in the long-running test process appears to still watch the removed client.crt paths. They are log noise and do not affect the surrounding results, as the PASS lines confirm.)
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-129908 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (49.470783667s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-129908 -n default-k8s-diff-port-129908
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.82s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-cvlw8" [9abeb745-37f0-4456-9c98-cccdd60e1dbb] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003581498s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
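
Note: UserAppExistsAfterStop (and the AddonExistsAfterStop steps that follow) reduce to waiting for the labeled dashboard pod to be healthy again after the restart. A rough manual equivalent of the check:

    kubectl --context no-preload-456925 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard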

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-wngnn" [d601191e-5f70-4d10-9a9b-58c41c86d1d4] Running
E1228 07:04:27.599070  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/kindnet-258759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003528881s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-cvlw8" [9abeb745-37f0-4456-9c98-cccdd60e1dbb] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003721463s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-456925 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-wngnn" [d601191e-5f70-4d10-9a9b-58c41c86d1d4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003819899s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-805353 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-456925 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
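
Note: VerifyKubernetesImages lists everything cached in the cluster's container runtime and reports images outside minikube's expected set; the kindnetd and busybox entries above are informational, not failures:

    # JSON listing of images present in the profile's runtime
    out/minikube-linux-amd64 -p no-preload-456925 image list --format=json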

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-805353 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (25.14s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-190777 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-190777 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (25.138524049s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (25.14s)
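
Note: the newest-cni start demonstrates two narrower knobs: --wait restricted to specific components, and --extra-config forwarding a kubeadm setting (the pod network CIDR) into the control plane. The invocation from this run, minus memory and logging flags:

    out/minikube-linux-amd64 start -p newest-cni-190777 \
      --wait=apiserver,system_pods,default_sa \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0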

                                                
                                    
TestPreload/PreloadSrc/gcs (12.92s)

=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-287055 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-gcs-287055 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd: (12.688171565s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-287055" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-287055
--- PASS: TestPreload/PreloadSrc/gcs (12.92s)
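
Note: the PreloadSrc subtests vary only --preload-source, which chooses where the preload tarball is downloaded from (gcs here, github in the sibling test below); --download-only stops after fetching, so no cluster is started:

    # fetch preload artifacts from GCS without creating a cluster
    out/minikube-linux-amd64 start -p test-preload-dl-gcs-287055 --download-only \
      --kubernetes-version v1.34.0-rc.1 --preload-source=gcs \
      --driver=docker --container-runtime=containerd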

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-flkqr" [d4dc9691-6198-4b27-9473-7cc830123b9b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003022771s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-vkxng" [ef1a8341-86ee-4b9c-b644-390515189555] Running
E1228 07:04:48.079750  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/kindnet-258759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002887854s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-flkqr" [d4dc9691-6198-4b27-9473-7cc830123b9b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00377525s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-129908 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-vkxng" [ef1a8341-86ee-4b9c-b644-390515189555] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004162803s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-982151 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-129908 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

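The image check parses the JSON emitted by image list --format=json. A sketch of a similar manual filter follows; it assumes jq is installed and that each entry carries a repoTags array, which is an assumption about the output shape rather than something shown in this log:

  # list image tags and drop the stock registry.k8s.io ones (rough approximation of the test's filter)
  out/minikube-linux-amd64 -p default-k8s-diff-port-129908 image list --format=json \
    | jq -r '.[].repoTags[]?' \
    | grep -v '^registry\.k8s\.io/'
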
TestPreload/PreloadSrc/github (9.26s)

=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-github-941249 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-github-941249 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd: (8.964795027s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-941249" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-github-941249
--- PASS: TestPreload/PreloadSrc/github (9.26s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-982151 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

TestPreload/PreloadSrc/gcs-cached (0.97s)

=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-cached-832630 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-832630" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-cached-832630
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.97s)

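The cached variant finishes in under a second because the tarball fetched by the earlier gcs run is reused. The cache can be inspected directly; the path below is the conventional minikube cache location, stated here as an assumption:

  # preload tarballs are reused from the local cache on subsequent download-only runs
  ls -lh ~/.minikube/cache/preloaded-tarball/
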
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-190777 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.75s)

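The enable call above remaps an addon component's image and registry. The same flags can be used standalone; values are verbatim from the test, which deliberately points metrics-server at a fake registry:

  # --images=MetricsServer=<image> overrides the component image; --registries overrides its registry
  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-190777 \
    --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
    --registries=MetricsServer=fake.domain
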
TestStartStop/group/newest-cni/serial/Stop (1.45s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-190777 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-190777 --alsologtostderr -v=3: (1.448546912s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.45s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-190777 -n newest-cni-190777
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-190777 -n newest-cni-190777: exit status 7 (80.84338ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-190777 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

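minikube status reports a stopped host through its exit code, which is why the harness notes "may be ok" above. A sketch of handling that in a script, using the same command the test runs:

  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-190777 -n newest-cni-190777
  rc=$?
  # per the run above, exit status 7 accompanies a "Stopped" host rather than a hard failure
  if [ "$rc" -eq 7 ]; then echo "host stopped; safe to enable addons before restarting"; fi
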
TestStartStop/group/newest-cni/serial/SecondStart (9.64s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-190777 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E1228 07:05:11.255171  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/calico-258759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:05:11.260525  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/calico-258759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:05:11.270780  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/calico-258759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:05:11.291041  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/calico-258759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:05:11.331313  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/calico-258759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:05:11.411656  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/calico-258759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:05:11.572112  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/calico-258759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:05:11.893210  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/calico-258759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:05:12.534203  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/calico-258759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:05:13.815246  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/calico-258759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:05:16.375788  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/calico-258759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:05:17.096693  555878 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/auto-258759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-190777 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (9.297624003s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-190777 -n newest-cni-190777
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (9.64s)

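The cert_rotation errors interleaved above reference client certs of profiles (calico-258759, auto-258759) that earlier tests already deleted; since this test still passes, they appear to be noise from a stale kubeconfig cert watcher rather than failures of this run. A quick way to confirm a profile's cert is gone, path verbatim from the log:

  # the watcher still references certs of deleted profiles; confirm the file is absent
  test -f /home/jenkins/minikube-integration/22352-552174/.minikube/profiles/calico-258759/client.crt \
    || echo "calico-258759 client.crt absent (profile deleted)"
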
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-190777 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)


Test skip (26/333)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.35.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

TestDownloadOnly/v1.35.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (6.07s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-258759 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-258759

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-258759

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-258759

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-258759

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-258759

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-258759

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-258759

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-258759

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-258759

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-258759

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: /etc/hosts:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: /etc/resolv.conf:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-258759

>>> host: crictl pods:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: crictl containers:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> k8s: describe netcat deployment:
error: context "kubenet-258759" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-258759" does not exist

>>> k8s: netcat logs:
error: context "kubenet-258759" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-258759" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-258759" does not exist

>>> k8s: coredns logs:
error: context "kubenet-258759" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-258759" does not exist

>>> k8s: api server logs:
error: context "kubenet-258759" does not exist

>>> host: /etc/cni:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: ip a s:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: ip r s:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: iptables-save:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: iptables table nat:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-258759" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-258759" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-258759" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: kubelet daemon config:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> k8s: kubelet logs:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-258759

>>> host: docker daemon status:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: docker daemon config:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: docker system info:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: cri-docker daemon status:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: cri-docker daemon config:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: cri-dockerd version:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: containerd daemon status:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: containerd daemon config:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: containerd config dump:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: crio daemon status:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: crio daemon config:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: /etc/crio:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

>>> host: crio config:
* Profile "kubenet-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258759"

----------------------- debugLogs end: kubenet-258759 [took: 5.49689623s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-258759" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-258759
--- SKIP: TestNetworkPlugins/group/kubenet (6.07s)

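Every probe in the debugLogs dump above fails the same way because the kubenet-258759 profile was never started, so no kubeconfig context exists for it. The two commands the error messages themselves suggest are enough to verify that state:

  kubectl config get-contexts            # no kubenet-258759 entry expected
  out/minikube-linux-amd64 profile list  # profile absent until 'minikube start -p kubenet-258759'
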
TestNetworkPlugins/group/cilium (4.63s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-258759 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-258759

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-258759

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-258759

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-258759

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-258759

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-258759

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-258759

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-258759

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-258759

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-258759

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-258759

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-258759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-258759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-258759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-258759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-258759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-258759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-258759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-258759" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-258759

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-258759

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-258759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-258759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-258759

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-258759

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-258759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-258759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-258759" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-258759" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-258759" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> host: kubelet daemon config:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> k8s: kubelet logs:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-258759

>>> host: docker daemon status:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> host: docker daemon config:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> host: docker system info:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> host: cri-docker daemon status:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> host: cri-docker daemon config:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> host: cri-dockerd version:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> host: containerd daemon status:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> host: containerd daemon config:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> host: containerd config dump:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> host: crio daemon status:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> host: crio daemon config:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> host: /etc/crio:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

>>> host: crio config:
* Profile "cilium-258759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258759"

----------------------- debugLogs end: cilium-258759 [took: 4.422476622s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-258759" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-258759
--- SKIP: TestNetworkPlugins/group/cilium (4.63s)
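For context on the two helpers_test.go lines above: skipped network-plugin tests still delete their profile on the way out. A minimal sketch of that skip-and-cleanup pattern in Go test code; the binary path matches the log, but cleanupProfile itself is illustrative, not minikube's actual helper:

package test

import (
	"os/exec"
	"testing"
)

// cleanupProfile mirrors the "Cleaning up ... profile" step above: the
// registered cleanup deletes the minikube profile whether the test
// passed, failed, or was skipped.
func cleanupProfile(t *testing.T, profile string) {
	t.Helper()
	t.Cleanup(func() {
		out, err := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).CombinedOutput()
		if err != nil {
			t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
		}
	})
}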

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-284795" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-284795
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)
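The skip above is driver-gated. A minimal sketch of such a gate, assuming a hypothetical driverName parameter rather than minikube's actual test plumbing:

package test

import "testing"

// maybeSkipDriverMounts reproduces the virtualbox-only gate behind the
// skip message in the log; how the suite actually discovers the active
// driver is not shown here.
func maybeSkipDriverMounts(t *testing.T, driverName string) {
	t.Helper()
	if driverName != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
}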
