Test Report: Docker_Linux_containerd 21923

                    
0ff1edca1acc03f8c3eb691c9cf9caebdbe6133d:2025-11-20:42417

Failed tests (4/333)

Order  Failed test                                                  Duration (s)
303    TestStartStop/group/old-k8s-version/serial/DeployApp        12.86
306    TestStartStop/group/no-preload/serial/DeployApp             12.68
327    TestStartStop/group/embed-certs/serial/DeployApp            12.21
333    TestStartStop/group/default-k8s-diff-port/serial/DeployApp  12.23
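
Each of the four failures is in the DeployApp step. The post-mortem below for the first of them shows the failing assertion: the test runs "ulimit -n" inside the busybox pod it deploys from testdata/busybox.yaml and expects an open-file limit of 1048576, but this run reported 1024. A minimal sketch for re-running the check by hand, using the profile, pod, and namespace names taken from this report (it assumes the cluster and pod are still up):

	# Open-file soft limit inside the test pod; the test expects 1048576, this run returned 1024.
	kubectl --context old-k8s-version-715005 exec busybox -- /bin/sh -c "ulimit -n"

	# The same limit can also be read inside the kicbase node container itself.
	docker exec old-k8s-version-715005 /bin/sh -c "ulimit -n"
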
TestStartStop/group/old-k8s-version/serial/DeployApp (12.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-715005 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3a1d0e8f-ce19-4ac1-bea8-96d6e879131e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3a1d0e8f-ce19-4ac1-bea8-96d6e879131e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003298168s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-715005 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-715005
helpers_test.go:243: (dbg) docker inspect old-k8s-version-715005:

-- stdout --
	[
	    {
	        "Id": "3b6a20512ce5e237d8ca49b91b2f96a096d390da4cf92a8def071dc90f221010",
	        "Created": "2025-11-20T20:51:55.667724791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 239626,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T20:51:55.707980557Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/3b6a20512ce5e237d8ca49b91b2f96a096d390da4cf92a8def071dc90f221010/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3b6a20512ce5e237d8ca49b91b2f96a096d390da4cf92a8def071dc90f221010/hostname",
	        "HostsPath": "/var/lib/docker/containers/3b6a20512ce5e237d8ca49b91b2f96a096d390da4cf92a8def071dc90f221010/hosts",
	        "LogPath": "/var/lib/docker/containers/3b6a20512ce5e237d8ca49b91b2f96a096d390da4cf92a8def071dc90f221010/3b6a20512ce5e237d8ca49b91b2f96a096d390da4cf92a8def071dc90f221010-json.log",
	        "Name": "/old-k8s-version-715005",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-715005:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-715005",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3b6a20512ce5e237d8ca49b91b2f96a096d390da4cf92a8def071dc90f221010",
	                "LowerDir": "/var/lib/docker/overlay2/85c7463c1a3a740713826ae627000420fe9eccd7da649211f57286f33afebd5f-init/diff:/var/lib/docker/overlay2/b8e13cfd95c92c89e06ea4ca61f150e2b9e9586529048197192d1a83648ef8cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/85c7463c1a3a740713826ae627000420fe9eccd7da649211f57286f33afebd5f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/85c7463c1a3a740713826ae627000420fe9eccd7da649211f57286f33afebd5f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/85c7463c1a3a740713826ae627000420fe9eccd7da649211f57286f33afebd5f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-715005",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-715005/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-715005",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-715005",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-715005",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "519c5740cd96610162543e0478f357a1c40858a76bf4bd954d93058851e4b011",
	            "SandboxKey": "/var/run/docker/netns/519c5740cd96",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-715005": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "369c4674dc51430afa443de03112fdde075c05b6373a2c857451d35a88c6b5e1",
	                    "EndpointID": "19dc54360de78ea08cefba6f708fa345c1a326c6b8006456f8533a57f821b980",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "7a:06:c9:9d:66:1e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-715005",
	                        "3b6a20512ce5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
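
For reference, the fields of the inspect dump above that matter most here can be pulled out directly with docker's Go-template formatter; a minimal sketch, with the container name taken from this report:

	# HostConfig.Ulimits (empty in the dump above, i.e. no explicit ulimit is set on the node container).
	docker inspect -f '{{json .HostConfig.Ulimits}}' old-k8s-version-715005

	# The published host port mappings shown in the NetworkSettings.Ports block above.
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-715005
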
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-715005 -n old-k8s-version-715005
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-715005 logs -n 25
E1120 20:52:48.808419    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-715005 logs -n 25: (1.022263083s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-876657 sudo crio config                                                                                                                                                                                                                   │ cilium-876657             │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │                     │
	│ delete  │ -p cilium-876657                                                                                                                                                                                                                                    │ cilium-876657             │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ start   │ -p cert-expiration-137718 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-137718    │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p NoKubernetes-666907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                         │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:51 UTC │
	│ delete  │ -p stopped-upgrade-058944                                                                                                                                                                                                                           │ stopped-upgrade-058944    │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p kubernetes-upgrade-902531 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-902531 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ delete  │ -p missing-upgrade-670521                                                                                                                                                                                                                           │ missing-upgrade-670521    │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p force-systemd-flag-431737 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-431737 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ delete  │ -p NoKubernetes-666907                                                                                                                                                                                                                              │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p NoKubernetes-666907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                         │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ stop    │ -p kubernetes-upgrade-902531                                                                                                                                                                                                                        │ kubernetes-upgrade-902531 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p kubernetes-upgrade-902531 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-902531 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │                     │
	│ ssh     │ -p NoKubernetes-666907 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │                     │
	│ stop    │ -p NoKubernetes-666907                                                                                                                                                                                                                              │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p NoKubernetes-666907 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ ssh     │ -p NoKubernetes-666907 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │                     │
	│ delete  │ -p NoKubernetes-666907                                                                                                                                                                                                                              │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p cert-options-636195 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-636195       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:52 UTC │
	│ ssh     │ force-systemd-flag-431737 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-431737 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ delete  │ -p force-systemd-flag-431737                                                                                                                                                                                                                        │ force-systemd-flag-431737 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p old-k8s-version-715005 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-715005    │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:52 UTC │
	│ ssh     │ cert-options-636195 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-636195       │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │ 20 Nov 25 20:52 UTC │
	│ ssh     │ -p cert-options-636195 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-636195       │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │ 20 Nov 25 20:52 UTC │
	│ delete  │ -p cert-options-636195                                                                                                                                                                                                                              │ cert-options-636195       │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │ 20 Nov 25 20:52 UTC │
	│ start   │ -p no-preload-480337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-480337         │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:52:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:52:08.252448  242858 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:52:08.252562  242858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:52:08.252570  242858 out.go:374] Setting ErrFile to fd 2...
	I1120 20:52:08.252576  242858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:52:08.252753  242858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
	I1120 20:52:08.253282  242858 out.go:368] Setting JSON to false
	I1120 20:52:08.254779  242858 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2080,"bootTime":1763669848,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:52:08.254847  242858 start.go:143] virtualization: kvm guest
	I1120 20:52:08.256503  242858 out.go:179] * [no-preload-480337] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:52:08.258025  242858 notify.go:221] Checking for updates...
	I1120 20:52:08.258048  242858 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:52:08.260128  242858 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:52:08.261508  242858 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3769/kubeconfig
	I1120 20:52:08.262712  242858 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3769/.minikube
	I1120 20:52:08.263964  242858 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:52:08.265480  242858 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:52:08.267315  242858 config.go:182] Loaded profile config "cert-expiration-137718": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:52:08.267441  242858 config.go:182] Loaded profile config "kubernetes-upgrade-902531": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:52:08.267541  242858 config.go:182] Loaded profile config "old-k8s-version-715005": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1120 20:52:08.267634  242858 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:52:08.298259  242858 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:52:08.298399  242858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:52:08.367035  242858 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:84 SystemTime:2025-11-20 20:52:08.353260141 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:52:08.367188  242858 docker.go:319] overlay module found
	I1120 20:52:08.368888  242858 out.go:179] * Using the docker driver based on user configuration
	I1120 20:52:08.370134  242858 start.go:309] selected driver: docker
	I1120 20:52:08.370149  242858 start.go:930] validating driver "docker" against <nil>
	I1120 20:52:08.370160  242858 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:52:08.370935  242858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:52:08.436760  242858 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:84 SystemTime:2025-11-20 20:52:08.425987757 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:52:08.436947  242858 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 20:52:08.437244  242858 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:52:08.438740  242858 out.go:179] * Using Docker driver with root privileges
	I1120 20:52:08.439836  242858 cni.go:84] Creating CNI manager for ""
	I1120 20:52:08.439894  242858 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 20:52:08.439908  242858 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 20:52:08.439975  242858 start.go:353] cluster config:
	{Name:no-preload-480337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-480337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:52:08.441167  242858 out.go:179] * Starting "no-preload-480337" primary control-plane node in "no-preload-480337" cluster
	I1120 20:52:08.442267  242858 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1120 20:52:08.443897  242858 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:52:08.445359  242858 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 20:52:08.445439  242858 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:52:08.445494  242858 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/config.json ...
	I1120 20:52:08.445524  242858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/config.json: {Name:mk67fe584bdd61e7dc470a4845c1a48d09ae85c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:08.445659  242858 cache.go:107] acquiring lock: {Name:mk3ea08bf43a5d2bac31f44c4411f5077815f926 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:08.445701  242858 cache.go:107] acquiring lock: {Name:mk452f143f3760942acee0a1afa340e79fb15acb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:08.445754  242858 cache.go:115] /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1120 20:52:08.445726  242858 cache.go:107] acquiring lock: {Name:mk32e408a68e033995572d30bae912b78d78fdd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:08.445767  242858 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 120.56µs
	I1120 20:52:08.445785  242858 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1120 20:52:08.445769  242858 cache.go:107] acquiring lock: {Name:mk8589edbfed330a1ddb51d34e55cf4f6dba2585 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:08.445801  242858 cache.go:107] acquiring lock: {Name:mka8113f9113c7cf8c73b708b8e0c0e4338b0522 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:08.445815  242858 cache.go:107] acquiring lock: {Name:mkf92c975c475c307d4c631b384e242552425a97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:08.445804  242858 cache.go:107] acquiring lock: {Name:mkace5f2fd3da1bb55f21aaae93deda29f684d06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:08.445847  242858 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 20:52:08.445836  242858 cache.go:107] acquiring lock: {Name:mk1cd73325d398de1f9fcd7c35b741773c7770b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:08.445897  242858 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 20:52:08.445912  242858 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 20:52:08.446005  242858 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 20:52:08.446020  242858 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1120 20:52:08.446023  242858 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 20:52:08.446081  242858 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1120 20:52:08.447286  242858 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 20:52:08.447293  242858 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 20:52:08.447287  242858 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1120 20:52:08.447406  242858 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 20:52:08.447417  242858 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 20:52:08.447444  242858 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 20:52:08.447421  242858 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1120 20:52:08.470469  242858 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 20:52:08.470487  242858 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 20:52:08.470502  242858 cache.go:243] Successfully downloaded all kic artifacts
	I1120 20:52:08.470529  242858 start.go:360] acquireMachinesLock for no-preload-480337: {Name:mk38ae0cd7f919fa42a7cfea565c7e28ffc15120 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:08.470636  242858 start.go:364] duration metric: took 88.862µs to acquireMachinesLock for "no-preload-480337"
	I1120 20:52:08.470665  242858 start.go:93] Provisioning new machine with config: &{Name:no-preload-480337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-480337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1120 20:52:08.470767  242858 start.go:125] createHost starting for "" (driver="docker")
	I1120 20:52:08.750662  238148 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.502666 seconds
	I1120 20:52:08.750868  238148 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 20:52:08.765060  238148 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 20:52:09.291266  238148 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 20:52:09.291589  238148 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-715005 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 20:52:09.814822  238148 kubeadm.go:319] [bootstrap-token] Using token: 16hbch.nehrzw8ak789mtyt
	I1120 20:52:06.090049  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1120 20:52:06.090086  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:06.535613  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:48474->192.168.103.2:8443: read: connection reset by peer
	I1120 20:52:06.586879  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:06.587349  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:07.087076  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:07.087656  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:07.587315  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:09.816352  238148 out.go:252]   - Configuring RBAC rules ...
	I1120 20:52:09.816506  238148 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 20:52:09.826772  238148 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 20:52:09.836932  238148 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 20:52:09.849779  238148 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 20:52:09.861350  238148 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 20:52:09.943146  238148 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 20:52:09.975570  238148 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 20:52:10.239991  238148 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 20:52:10.273516  238148 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 20:52:10.274683  238148 kubeadm.go:319] 
	I1120 20:52:10.274779  238148 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 20:52:10.274786  238148 kubeadm.go:319] 
	I1120 20:52:10.274929  238148 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 20:52:10.274954  238148 kubeadm.go:319] 
	I1120 20:52:10.274986  238148 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 20:52:10.275062  238148 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 20:52:10.275128  238148 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 20:52:10.275142  238148 kubeadm.go:319] 
	I1120 20:52:10.275209  238148 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 20:52:10.275217  238148 kubeadm.go:319] 
	I1120 20:52:10.275277  238148 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 20:52:10.275286  238148 kubeadm.go:319] 
	I1120 20:52:10.275356  238148 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 20:52:10.275540  238148 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 20:52:10.275662  238148 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 20:52:10.275671  238148 kubeadm.go:319] 
	I1120 20:52:10.275820  238148 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 20:52:10.275972  238148 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 20:52:10.275996  238148 kubeadm.go:319] 
	I1120 20:52:10.276126  238148 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 16hbch.nehrzw8ak789mtyt \
	I1120 20:52:10.276260  238148 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6363bf4687e9474b61ef24181dbec602d7e15f5bf816f1e3fd72b87e3c0c983f \
	I1120 20:52:10.276290  238148 kubeadm.go:319] 	--control-plane 
	I1120 20:52:10.276294  238148 kubeadm.go:319] 
	I1120 20:52:10.276478  238148 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 20:52:10.276490  238148 kubeadm.go:319] 
	I1120 20:52:10.276600  238148 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 16hbch.nehrzw8ak789mtyt \
	I1120 20:52:10.276726  238148 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6363bf4687e9474b61ef24181dbec602d7e15f5bf816f1e3fd72b87e3c0c983f 
	I1120 20:52:10.278899  238148 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1120 20:52:10.279067  238148 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 20:52:10.279103  238148 cni.go:84] Creating CNI manager for ""
	I1120 20:52:10.279116  238148 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 20:52:10.280874  238148 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 20:52:08.472716  242858 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 20:52:08.472973  242858 start.go:159] libmachine.API.Create for "no-preload-480337" (driver="docker")
	I1120 20:52:08.473007  242858 client.go:173] LocalClient.Create starting
	I1120 20:52:08.473093  242858 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem
	I1120 20:52:08.473142  242858 main.go:143] libmachine: Decoding PEM data...
	I1120 20:52:08.473162  242858 main.go:143] libmachine: Parsing certificate...
	I1120 20:52:08.473234  242858 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-3769/.minikube/certs/cert.pem
	I1120 20:52:08.473265  242858 main.go:143] libmachine: Decoding PEM data...
	I1120 20:52:08.473278  242858 main.go:143] libmachine: Parsing certificate...
	I1120 20:52:08.473719  242858 cli_runner.go:164] Run: docker network inspect no-preload-480337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 20:52:08.493293  242858 cli_runner.go:211] docker network inspect no-preload-480337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 20:52:08.493382  242858 network_create.go:284] running [docker network inspect no-preload-480337] to gather additional debugging logs...
	I1120 20:52:08.493408  242858 cli_runner.go:164] Run: docker network inspect no-preload-480337
	W1120 20:52:08.510935  242858 cli_runner.go:211] docker network inspect no-preload-480337 returned with exit code 1
	I1120 20:52:08.510962  242858 network_create.go:287] error running [docker network inspect no-preload-480337]: docker network inspect no-preload-480337: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-480337 not found
	I1120 20:52:08.510973  242858 network_create.go:289] output of [docker network inspect no-preload-480337]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-480337 not found
	
	** /stderr **
	I1120 20:52:08.511058  242858 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:52:08.530865  242858 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5a901ca622c0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:53:dd:e9:bf:88} reservation:<nil>}
	I1120 20:52:08.531757  242858 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6594e2724ba2 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:e6:72:df:4b:23} reservation:<nil>}
	I1120 20:52:08.532655  242858 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b5b02f2241a6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:6b:71:15:af:34} reservation:<nil>}
	I1120 20:52:08.533472  242858 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00234b7d0}
	I1120 20:52:08.533495  242858 network_create.go:124] attempt to create docker network no-preload-480337 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1120 20:52:08.533546  242858 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-480337 no-preload-480337
	I1120 20:52:08.585290  242858 network_create.go:108] docker network no-preload-480337 192.168.76.0/24 created
	I1120 20:52:08.585325  242858 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-480337" container
	I1120 20:52:08.585461  242858 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 20:52:08.597984  242858 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1120 20:52:08.598001  242858 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1120 20:52:08.604956  242858 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1120 20:52:08.605315  242858 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1120 20:52:08.606694  242858 cli_runner.go:164] Run: docker volume create no-preload-480337 --label name.minikube.sigs.k8s.io=no-preload-480337 --label created_by.minikube.sigs.k8s.io=true
	I1120 20:52:08.618534  242858 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1120 20:52:08.621471  242858 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1120 20:52:08.624491  242858 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1120 20:52:08.627070  242858 oci.go:103] Successfully created a docker volume no-preload-480337
	I1120 20:52:08.627146  242858 cli_runner.go:164] Run: docker run --rm --name no-preload-480337-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-480337 --entrypoint /usr/bin/test -v no-preload-480337:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 20:52:08.697577  242858 cache.go:157] /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1120 20:52:08.697608  242858 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 251.842825ms
	I1120 20:52:08.697621  242858 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1120 20:52:09.049023  242858 oci.go:107] Successfully prepared a docker volume no-preload-480337
	I1120 20:52:09.049073  242858 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	W1120 20:52:09.049166  242858 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1120 20:52:09.049225  242858 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1120 20:52:09.049280  242858 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 20:52:09.118960  242858 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-480337 --name no-preload-480337 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-480337 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-480337 --network no-preload-480337 --ip 192.168.76.2 --volume no-preload-480337:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
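The --publish=127.0.0.1:: flags in the run above ask Docker for random loopback host ports; every later dial to 127.0.0.1:33064 in this log is the binding for 22/tcp. A hedged check of those bindings (the second command is the same inspect template the log itself runs):

	docker port no-preload-480337
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-480337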
	I1120 20:52:09.232948  242858 cache.go:157] /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1120 20:52:09.232980  242858 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 787.2171ms
	I1120 20:52:09.232993  242858 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1120 20:52:09.453384  242858 cli_runner.go:164] Run: docker container inspect no-preload-480337 --format={{.State.Running}}
	I1120 20:52:09.472962  242858 cli_runner.go:164] Run: docker container inspect no-preload-480337 --format={{.State.Status}}
	I1120 20:52:09.492439  242858 cli_runner.go:164] Run: docker exec no-preload-480337 stat /var/lib/dpkg/alternatives/iptables
	I1120 20:52:09.543100  242858 oci.go:144] the created container "no-preload-480337" has a running status.
	I1120 20:52:09.543125  242858 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-3769/.minikube/machines/no-preload-480337/id_rsa...
	I1120 20:52:09.964490  242858 cache.go:157] /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1120 20:52:09.964524  242858 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.518724938s
	I1120 20:52:09.964550  242858 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1120 20:52:10.039948  242858 cache.go:157] /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1120 20:52:10.039987  242858 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.594319275s
	I1120 20:52:10.040005  242858 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1120 20:52:10.093051  242858 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-3769/.minikube/machines/no-preload-480337/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 20:52:10.094772  242858 cache.go:157] /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1120 20:52:10.094803  242858 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.649119384s
	I1120 20:52:10.094827  242858 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1120 20:52:10.116809  242858 cache.go:157] /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1120 20:52:10.116833  242858 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.671077498s
	I1120 20:52:10.116845  242858 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1120 20:52:10.123333  242858 cli_runner.go:164] Run: docker container inspect no-preload-480337 --format={{.State.Status}}
	I1120 20:52:10.152798  242858 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 20:52:10.152827  242858 kic_runner.go:114] Args: [docker exec --privileged no-preload-480337 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 20:52:10.201923  242858 cli_runner.go:164] Run: docker container inspect no-preload-480337 --format={{.State.Status}}
	I1120 20:52:10.228150  242858 machine.go:94] provisionDockerMachine start ...
	I1120 20:52:10.228251  242858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-480337
	I1120 20:52:10.253229  242858 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:10.253812  242858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1120 20:52:10.253835  242858 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:52:10.413054  242858 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-480337
	
	I1120 20:52:10.413084  242858 ubuntu.go:182] provisioning hostname "no-preload-480337"
	I1120 20:52:10.413154  242858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-480337
	I1120 20:52:10.435191  242858 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:10.435437  242858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1120 20:52:10.435454  242858 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-480337 && echo "no-preload-480337" | sudo tee /etc/hostname
	I1120 20:52:10.583828  242858 cache.go:157] /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1120 20:52:10.583853  242858 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.138065582s
	I1120 20:52:10.583869  242858 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1120 20:52:10.583886  242858 cache.go:87] Successfully saved all images to host disk.
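At this point every control-plane image for v1.34.1 has been written as a tar under the host cache; a quick hedged way to see what was produced (paths from this run):

	ls -lh /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/
	ls -lh /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/coredns/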
	I1120 20:52:10.588140  242858 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-480337
	
	I1120 20:52:10.588225  242858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-480337
	I1120 20:52:10.607575  242858 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:10.607874  242858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1120 20:52:10.607903  242858 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-480337' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-480337/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-480337' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:52:10.751208  242858 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:52:10.751244  242858 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-3769/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-3769/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-3769/.minikube}
	I1120 20:52:10.751270  242858 ubuntu.go:190] setting up certificates
	I1120 20:52:10.751282  242858 provision.go:84] configureAuth start
	I1120 20:52:10.751356  242858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-480337
	I1120 20:52:10.773196  242858 provision.go:143] copyHostCerts
	I1120 20:52:10.773264  242858 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3769/.minikube/ca.pem, removing ...
	I1120 20:52:10.773278  242858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3769/.minikube/ca.pem
	I1120 20:52:10.773380  242858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-3769/.minikube/ca.pem (1082 bytes)
	I1120 20:52:10.773498  242858 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3769/.minikube/cert.pem, removing ...
	I1120 20:52:10.773511  242858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3769/.minikube/cert.pem
	I1120 20:52:10.773555  242858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-3769/.minikube/cert.pem (1123 bytes)
	I1120 20:52:10.773632  242858 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3769/.minikube/key.pem, removing ...
	I1120 20:52:10.773642  242858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3769/.minikube/key.pem
	I1120 20:52:10.773680  242858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-3769/.minikube/key.pem (1679 bytes)
	I1120 20:52:10.773754  242858 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-3769/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca-key.pem org=jenkins.no-preload-480337 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-480337]
	I1120 20:52:10.929235  242858 provision.go:177] copyRemoteCerts
	I1120 20:52:10.929300  242858 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:52:10.929359  242858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-480337
	I1120 20:52:10.949497  242858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/no-preload-480337/id_rsa Username:docker}
	I1120 20:52:11.046944  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 20:52:11.070669  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:52:11.092589  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 20:52:11.112476  242858 provision.go:87] duration metric: took 361.173682ms to configureAuth
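configureAuth regenerates the machine server certificate with the SANs listed above and copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. A hedged spot-check of the result, using only paths from this run:

	# inside the node: the three files copied by copyRemoteCerts
	docker exec no-preload-480337 ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
	# on the host: confirm the SANs baked into the freshly generated server cert
	openssl x509 -noout -text -in /home/jenkins/minikube-integration/21923-3769/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'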
	I1120 20:52:11.112506  242858 ubuntu.go:206] setting minikube options for container-runtime
	I1120 20:52:11.112675  242858 config.go:182] Loaded profile config "no-preload-480337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:52:11.112687  242858 machine.go:97] duration metric: took 884.515372ms to provisionDockerMachine
	I1120 20:52:11.112693  242858 client.go:176] duration metric: took 2.639675681s to LocalClient.Create
	I1120 20:52:11.112713  242858 start.go:167] duration metric: took 2.639742922s to libmachine.API.Create "no-preload-480337"
	I1120 20:52:11.112722  242858 start.go:293] postStartSetup for "no-preload-480337" (driver="docker")
	I1120 20:52:11.112729  242858 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:52:11.112769  242858 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:52:11.112801  242858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-480337
	I1120 20:52:11.131812  242858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/no-preload-480337/id_rsa Username:docker}
	I1120 20:52:11.231672  242858 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:52:11.235446  242858 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 20:52:11.235470  242858 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 20:52:11.235483  242858 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3769/.minikube/addons for local assets ...
	I1120 20:52:11.235540  242858 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3769/.minikube/files for local assets ...
	I1120 20:52:11.235608  242858 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-3769/.minikube/files/etc/ssl/certs/77312.pem -> 77312.pem in /etc/ssl/certs
	I1120 20:52:11.235694  242858 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 20:52:11.243934  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/files/etc/ssl/certs/77312.pem --> /etc/ssl/certs/77312.pem (1708 bytes)
	I1120 20:52:11.264631  242858 start.go:296] duration metric: took 151.893896ms for postStartSetup
	I1120 20:52:11.265034  242858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-480337
	I1120 20:52:11.283078  242858 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/config.json ...
	I1120 20:52:11.283417  242858 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:52:11.283468  242858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-480337
	I1120 20:52:11.301828  242858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/no-preload-480337/id_rsa Username:docker}
	I1120 20:52:11.397720  242858 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 20:52:11.403267  242858 start.go:128] duration metric: took 2.932485378s to createHost
	I1120 20:52:11.403302  242858 start.go:83] releasing machines lock for "no-preload-480337", held for 2.932643607s
	I1120 20:52:11.403380  242858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-480337
	I1120 20:52:11.422042  242858 ssh_runner.go:195] Run: cat /version.json
	I1120 20:52:11.422066  242858 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:52:11.422097  242858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-480337
	I1120 20:52:11.422125  242858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-480337
	I1120 20:52:11.440831  242858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/no-preload-480337/id_rsa Username:docker}
	I1120 20:52:11.441126  242858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/no-preload-480337/id_rsa Username:docker}
	I1120 20:52:11.585908  242858 ssh_runner.go:195] Run: systemctl --version
	I1120 20:52:11.592596  242858 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:52:11.597797  242858 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:52:11.597875  242858 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:52:11.623700  242858 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1120 20:52:11.623720  242858 start.go:496] detecting cgroup driver to use...
	I1120 20:52:11.623747  242858 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 20:52:11.623815  242858 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1120 20:52:11.640937  242858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1120 20:52:11.655322  242858 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:52:11.655394  242858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:52:11.671618  242858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:52:11.689831  242858 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:52:11.776195  242858 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:52:11.859262  242858 docker.go:234] disabling docker service ...
	I1120 20:52:11.859326  242858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:52:11.877980  242858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:52:11.890841  242858 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:52:11.974290  242858 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:52:12.062916  242858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 20:52:12.075846  242858 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:52:12.090429  242858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1120 20:52:12.101339  242858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1120 20:52:12.111481  242858 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1120 20:52:12.111532  242858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1120 20:52:12.120812  242858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1120 20:52:12.130543  242858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1120 20:52:12.140477  242858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1120 20:52:12.150476  242858 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:52:12.158924  242858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1120 20:52:12.168516  242858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1120 20:52:12.177761  242858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1120 20:52:12.187179  242858 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:52:12.194577  242858 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 20:52:12.202081  242858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:52:12.283770  242858 ssh_runner.go:195] Run: sudo systemctl restart containerd
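The run of sed edits above amounts to a handful of settings in /etc/containerd/config.toml (systemd cgroups, the pause 3.10.1 sandbox image, /etc/cni/net.d as the CNI conf dir, unprivileged ports enabled) followed by a daemon reload and restart. A hedged way to confirm the file and service ended up in that state:

	docker exec no-preload-480337 grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports|restrict_oom_score_adj' /etc/containerd/config.toml
	docker exec no-preload-480337 systemctl is-active containerd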
	I1120 20:52:12.353452  242858 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1120 20:52:12.353512  242858 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1120 20:52:12.357611  242858 start.go:564] Will wait 60s for crictl version
	I1120 20:52:12.357665  242858 ssh_runner.go:195] Run: which crictl
	I1120 20:52:12.361283  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 20:52:12.386880  242858 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1120 20:52:12.386957  242858 ssh_runner.go:195] Run: containerd --version
	I1120 20:52:12.407763  242858 ssh_runner.go:195] Run: containerd --version
	I1120 20:52:12.431279  242858 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1120 20:52:12.432442  242858 cli_runner.go:164] Run: docker network inspect no-preload-480337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:52:12.450847  242858 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1120 20:52:12.455064  242858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
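The one-liner above rewrites /etc/hosts inside the node so that host.minikube.internal resolves to the network gateway (192.168.76.1 in this run); a hedged check:

	docker exec no-preload-480337 grep host.minikube.internal /etc/hosts
	docker exec no-preload-480337 getent hosts host.minikube.internal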
	I1120 20:52:12.465598  242858 kubeadm.go:884] updating cluster {Name:no-preload-480337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-480337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 20:52:12.465697  242858 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 20:52:12.465730  242858 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:52:12.489581  242858 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1120 20:52:12.489601  242858 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
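With no preload tarball for v1.34.1, minikube falls back to loading each cached image individually; the per-image existence probe it uses appears below and can be replayed by hand inside the node, e.g.:

	sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	sudo /usr/local/bin/crictl images --output json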
	I1120 20:52:12.489669  242858 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:52:12.489678  242858 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 20:52:12.489688  242858 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1120 20:52:12.489706  242858 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1120 20:52:12.489730  242858 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 20:52:12.489710  242858 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 20:52:12.489760  242858 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 20:52:12.489762  242858 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 20:52:12.491060  242858 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 20:52:12.491091  242858 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 20:52:12.491094  242858 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1120 20:52:12.491120  242858 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:52:12.491141  242858 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1120 20:52:12.491061  242858 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 20:52:12.491061  242858 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 20:52:12.491060  242858 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 20:52:12.617642  242858 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115"
	I1120 20:52:12.617722  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1120 20:52:12.621253  242858 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1120 20:52:12.621301  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1120 20:52:12.625193  242858 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1120 20:52:12.625252  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1120 20:52:12.635123  242858 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1120 20:52:12.635217  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1120 20:52:12.643402  242858 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1120 20:52:12.643452  242858 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1120 20:52:12.643499  242858 ssh_runner.go:195] Run: which crictl
	I1120 20:52:12.643799  242858 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1120 20:52:12.643834  242858 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 20:52:12.643895  242858 ssh_runner.go:195] Run: which crictl
	I1120 20:52:12.646580  242858 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1120 20:52:12.646643  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 20:52:12.650546  242858 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1120 20:52:12.650587  242858 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 20:52:12.650636  242858 ssh_runner.go:195] Run: which crictl
	I1120 20:52:12.658434  242858 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1120 20:52:12.658498  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1120 20:52:12.661446  242858 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1120 20:52:12.661506  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1120 20:52:12.661517  242858 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 20:52:12.661564  242858 ssh_runner.go:195] Run: which crictl
	I1120 20:52:12.661601  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1120 20:52:12.670723  242858 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1120 20:52:12.670795  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1120 20:52:12.671031  242858 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1120 20:52:12.671088  242858 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 20:52:12.671143  242858 ssh_runner.go:195] Run: which crictl
	I1120 20:52:12.695733  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1120 20:52:12.695759  242858 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1120 20:52:12.695783  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1120 20:52:12.695786  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1120 20:52:12.695793  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1120 20:52:12.695794  242858 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1120 20:52:12.695800  242858 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1120 20:52:12.695822  242858 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 20:52:12.695828  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 20:52:12.695835  242858 ssh_runner.go:195] Run: which crictl
	I1120 20:52:12.695852  242858 ssh_runner.go:195] Run: which crictl
	I1120 20:52:12.727921  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1120 20:52:12.727991  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1120 20:52:12.728037  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1120 20:52:12.728124  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1120 20:52:12.728128  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1120 20:52:12.728361  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1120 20:52:12.761629  242858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1120 20:52:12.761727  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1120 20:52:12.761733  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 20:52:12.761736  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1120 20:52:12.761970  242858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1120 20:52:12.762044  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1120 20:52:12.788758  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1120 20:52:12.788878  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1120 20:52:12.788907  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1120 20:52:12.791281  242858 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1120 20:52:12.791321  242858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1120 20:52:12.791417  242858 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1120 20:52:12.791441  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1120 20:52:12.791441  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1120 20:52:12.791314  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1120 20:52:12.791289  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 20:52:12.836623  242858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1120 20:52:12.836637  242858 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1120 20:52:12.836653  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1120 20:52:12.836663  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1120 20:52:12.836625  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1120 20:52:12.836749  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1120 20:52:12.840681  242858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1120 20:52:12.840774  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1120 20:52:12.997385  242858 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1120 20:52:12.997425  242858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1120 20:52:12.997436  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1120 20:52:12.997491  242858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1120 20:52:12.997522  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1120 20:52:12.997527  242858 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1120 20:52:12.997544  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1120 20:52:12.997585  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1120 20:52:13.032735  242858 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1120 20:52:13.032771  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1120 20:52:13.032744  242858 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1120 20:52:13.032810  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1120 20:52:13.106714  242858 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1120 20:52:13.106787  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1120 20:52:10.282432  238148 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 20:52:10.288612  238148 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1120 20:52:10.288635  238148 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 20:52:10.325924  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 20:52:11.009949  238148 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 20:52:11.010040  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:11.010050  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-715005 minikube.k8s.io/updated_at=2025_11_20T20_52_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=old-k8s-version-715005 minikube.k8s.io/primary=true
	I1120 20:52:11.019529  238148 ops.go:34] apiserver oom_adj: -16
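The runs above read the apiserver's oom_adj (-16), grant kube-system:default cluster-admin via the minikube-rbac binding, and stamp the node with the minikube metadata labels; a hedged check of the applied labels, with the binary and kubeconfig paths as in the log:

	sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node old-k8s-version-715005 --show-labels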
	I1120 20:52:11.082068  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:11.582258  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:12.082538  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:12.582696  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:13.082475  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:13.582190  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:14.082300  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:14.582719  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:12.588104  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1120 20:52:12.588162  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:13.257481  242858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1120 20:52:13.257518  242858 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1120 20:52:13.257567  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1120 20:52:13.484484  242858 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1120 20:52:13.484551  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:52:14.260286  242858 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.002694769s)
	I1120 20:52:14.260311  242858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1120 20:52:14.260327  242858 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1120 20:52:14.260388  242858 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1120 20:52:14.260440  242858 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:52:14.260484  242858 ssh_runner.go:195] Run: which crictl
	I1120 20:52:14.260397  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1120 20:52:14.264886  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:52:15.172700  242858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1120 20:52:15.172730  242858 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1120 20:52:15.172775  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1120 20:52:15.172846  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:52:16.416639  242858 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.243755408s)
	I1120 20:52:16.416707  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:52:16.416710  242858 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.243913458s)
	I1120 20:52:16.416738  242858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1120 20:52:16.416763  242858 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1120 20:52:16.416796  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1120 20:52:17.473434  242858 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.056609494s)
	I1120 20:52:17.473462  242858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1120 20:52:17.473496  242858 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1120 20:52:17.473499  242858 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.056765018s)
	I1120 20:52:17.473547  242858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1120 20:52:17.473572  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1120 20:52:17.473641  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1120 20:52:17.477792  242858 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1120 20:52:17.477828  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1120 20:52:15.082709  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:15.582847  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:16.082616  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:16.582607  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:17.082814  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:17.582328  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:18.082295  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:18.582141  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:19.082885  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:19.582520  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:17.589485  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1120 20:52:17.589528  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:20.082310  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:20.583143  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:21.082776  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:21.583013  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:22.082458  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:22.582097  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:22.687347  238148 kubeadm.go:1114] duration metric: took 11.677365094s to wait for elevateKubeSystemPrivileges
	I1120 20:52:22.687400  238148 kubeadm.go:403] duration metric: took 21.724823766s to StartCluster
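The block of identical "get sa default" runs at roughly half-second intervals above is simply a poll for the default ServiceAccount to appear, which is what elevateKubeSystemPrivileges waits on; a hedged rendering of that loop as plain shell:

	until sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done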
	I1120 20:52:22.687423  238148 settings.go:142] acquiring lock: {Name:mkd78c1a946fc1da0bff0b049ee93f62b6457c3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:22.687501  238148 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-3769/kubeconfig
	I1120 20:52:22.689408  238148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/kubeconfig: {Name:mk92246a312eabd67c28c34f15135551d85e2541 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:22.689743  238148 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1120 20:52:22.689876  238148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 20:52:22.690153  238148 config.go:182] Loaded profile config "old-k8s-version-715005": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1120 20:52:22.690208  238148 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 20:52:22.690292  238148 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-715005"
	I1120 20:52:22.690318  238148 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-715005"
	I1120 20:52:22.690347  238148 host.go:66] Checking if "old-k8s-version-715005" exists ...
	I1120 20:52:22.690337  238148 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-715005"
	I1120 20:52:22.690418  238148 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-715005"
	I1120 20:52:22.690925  238148 cli_runner.go:164] Run: docker container inspect old-k8s-version-715005 --format={{.State.Status}}
	I1120 20:52:22.691054  238148 cli_runner.go:164] Run: docker container inspect old-k8s-version-715005 --format={{.State.Status}}
	I1120 20:52:22.691183  238148 out.go:179] * Verifying Kubernetes components...
	I1120 20:52:22.694668  238148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:52:22.721925  238148 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-715005"
	I1120 20:52:22.722060  238148 host.go:66] Checking if "old-k8s-version-715005" exists ...
	I1120 20:52:22.722655  238148 cli_runner.go:164] Run: docker container inspect old-k8s-version-715005 --format={{.State.Status}}
	I1120 20:52:22.723628  238148 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:52:18.613862  242858 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.140264773s)
	I1120 20:52:18.613889  242858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1120 20:52:18.613927  242858 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1120 20:52:18.613983  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1120 20:52:21.193703  242858 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (2.579693858s)
	I1120 20:52:21.193738  242858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1120 20:52:21.193773  242858 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1120 20:52:21.193840  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1120 20:52:21.574262  242858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1120 20:52:21.574305  242858 cache_images.go:125] Successfully loaded all cached images
	I1120 20:52:21.574312  242858 cache_images.go:94] duration metric: took 9.084699265s to LoadCachedImages
	I1120 20:52:21.574329  242858 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1120 20:52:21.574471  242858 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-480337 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-480337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
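For context, the [Unit]/[Service] fragment above is the kubelet systemd drop-in that minikube renders for this node; a few lines further down it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal sketch of how the rendered unit could be inspected afterwards, assuming the no-preload-480337 profile is still running (these commands are illustrative and not part of the test run):

  # Sketch: read back the drop-in and the effective kubelet unit on the node.
  minikube ssh -p no-preload-480337 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  minikube ssh -p no-preload-480337 -- sudo systemctl cat kubelet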
	I1120 20:52:21.574537  242858 ssh_runner.go:195] Run: sudo crictl info
	I1120 20:52:21.603319  242858 cni.go:84] Creating CNI manager for ""
	I1120 20:52:21.603343  242858 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 20:52:21.603377  242858 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 20:52:21.603411  242858 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-480337 NodeName:no-preload-480337 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 20:52:21.603588  242858 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-480337"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
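The YAML above is the kubeadm configuration that minikube writes to /var/tmp/minikube/kubeadm.yaml (first as kubeadm.yaml.new, see the scp and cp steps below) before invoking kubeadm init. As a hedged sketch, assuming a recent kubeadm that ships the config validate subcommand, the rendered file could be sanity-checked on the node without touching cluster state:

  # Sketch only: validate the generated kubeadm config on the node.
  minikube ssh -p no-preload-480337 -- sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml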
	
	I1120 20:52:21.603664  242858 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:52:21.612556  242858 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1120 20:52:21.612621  242858 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1120 20:52:21.620971  242858 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1120 20:52:21.621063  242858 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21923-3769/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1120 20:52:21.621075  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1120 20:52:21.621091  242858 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21923-3769/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1120 20:52:21.625310  242858 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1120 20:52:21.625358  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1120 20:52:22.261818  242858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:52:22.276106  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1120 20:52:22.280357  242858 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1120 20:52:22.280403  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1120 20:52:22.550268  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1120 20:52:22.554488  242858 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1120 20:52:22.554526  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
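The kubectl, kubelet, and kubeadm binaries in this block are downloaded from dl.k8s.io with a published SHA-256 checksum (the ?checksum=file:... suffix on the URLs) and then copied into /var/lib/minikube/binaries/v1.34.1 on the node. A hedged sketch of the same integrity check done by hand, following the upstream download pattern:

  # Sketch: fetch a binary plus its checksum file and verify them together.
  curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl
  curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
  echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check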
	I1120 20:52:22.811251  242858 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 20:52:22.821552  242858 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1120 20:52:22.839319  242858 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:52:22.860159  242858 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1120 20:52:22.875158  242858 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1120 20:52:22.880468  242858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
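The grep/rewrite pair above ensures that /etc/hosts on the node maps control-plane.minikube.internal to the node IP before kubeadm runs. A sketch of confirming the injected record (profile name and IP taken from this log):

  # Sketch: check the host record minikube just wrote.
  minikube ssh -p no-preload-480337 -- grep control-plane.minikube.internal /etc/hosts
  # Expected, per this log: 192.168.76.2	control-plane.minikube.internal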
	I1120 20:52:22.892551  242858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:52:23.010633  242858 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:52:23.045182  242858 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337 for IP: 192.168.76.2
	I1120 20:52:23.045210  242858 certs.go:195] generating shared ca certs ...
	I1120 20:52:23.045229  242858 certs.go:227] acquiring lock for ca certs: {Name:mk775617087d2732283088aad08819408765453b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:23.045401  242858 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-3769/.minikube/ca.key
	I1120 20:52:23.045458  242858 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-3769/.minikube/proxy-client-ca.key
	I1120 20:52:23.045474  242858 certs.go:257] generating profile certs ...
	I1120 20:52:23.045550  242858 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.key
	I1120 20:52:23.045576  242858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.crt with IP's: []
	I1120 20:52:22.724959  238148 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:52:22.725017  238148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 20:52:22.725109  238148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-715005
	I1120 20:52:22.757195  238148 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 20:52:22.757229  238148 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 20:52:22.757287  238148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-715005
	I1120 20:52:22.766165  238148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33059 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/old-k8s-version-715005/id_rsa Username:docker}
	I1120 20:52:22.790360  238148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33059 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/old-k8s-version-715005/id_rsa Username:docker}
	I1120 20:52:22.826000  238148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 20:52:22.866259  238148 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:52:22.884982  238148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:52:22.912703  238148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 20:52:23.113529  238148 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
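The host.minikube.internal record is injected by rewriting the coredns ConfigMap: the sed pipeline a few lines above inserts a hosts{} block ahead of the forward plugin in the Corefile. A sketch of reading the patched Corefile back, reusing the kubectl context already used elsewhere in this report:

  # Sketch: print the Corefile after minikube's hosts{} injection.
  kubectl --context old-k8s-version-715005 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'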
	I1120 20:52:23.115312  238148 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-715005" to be "Ready" ...
	I1120 20:52:23.319716  238148 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1120 20:52:23.320952  238148 addons.go:515] duration metric: took 630.739345ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1120 20:52:23.618249  238148 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-715005" context rescaled to 1 replicas
	I1120 20:52:22.590569  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1120 20:52:22.590637  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:23.272593  242858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.crt ...
	I1120 20:52:23.272629  242858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.crt: {Name:mk7a84bdb8ce4d387a03a977e465f46901b9ecca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:23.272826  242858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.key ...
	I1120 20:52:23.272846  242858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.key: {Name:mk85515619d0d5f42ade705dd7b83fa5c49d94e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:23.272962  242858 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.key.3960ac87
	I1120 20:52:23.272987  242858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.crt.3960ac87 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1120 20:52:23.487592  242858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.crt.3960ac87 ...
	I1120 20:52:23.487619  242858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.crt.3960ac87: {Name:mkfd30b6222da006020eb33948c0ef334b323426 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:23.487776  242858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.key.3960ac87 ...
	I1120 20:52:23.487790  242858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.key.3960ac87: {Name:mk44a5d802018397fb26ee24c50c7deaa57ff0c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:23.487872  242858 certs.go:382] copying /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.crt.3960ac87 -> /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.crt
	I1120 20:52:23.487948  242858 certs.go:386] copying /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.key.3960ac87 -> /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.key
	I1120 20:52:23.488013  242858 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/proxy-client.key
	I1120 20:52:23.488033  242858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/proxy-client.crt with IP's: []
	I1120 20:52:23.785632  242858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/proxy-client.crt ...
	I1120 20:52:23.785658  242858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/proxy-client.crt: {Name:mk446faa52377df58cd5afc43090ee71e8db7eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:23.785816  242858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/proxy-client.key ...
	I1120 20:52:23.785832  242858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/proxy-client.key: {Name:mk31b320f6c39c68d8ce39cc9567e7b46fda7feb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:23.786011  242858 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/7731.pem (1338 bytes)
	W1120 20:52:23.786063  242858 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-3769/.minikube/certs/7731_empty.pem, impossibly tiny 0 bytes
	I1120 20:52:23.786073  242858 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 20:52:23.786111  242858 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:52:23.786134  242858 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:52:23.786160  242858 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/key.pem (1679 bytes)
	I1120 20:52:23.786198  242858 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/files/etc/ssl/certs/77312.pem (1708 bytes)
	I1120 20:52:23.786744  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:52:23.806909  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:52:23.825261  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:52:23.843574  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:52:23.862110  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 20:52:23.880708  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 20:52:23.899197  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 20:52:23.917532  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1120 20:52:23.939072  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/files/etc/ssl/certs/77312.pem --> /usr/share/ca-certificates/77312.pem (1708 bytes)
	I1120 20:52:24.063298  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:52:24.128578  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/certs/7731.pem --> /usr/share/ca-certificates/7731.pem (1338 bytes)
	I1120 20:52:24.151285  242858 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 20:52:24.164998  242858 ssh_runner.go:195] Run: openssl version
	I1120 20:52:24.171390  242858 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/77312.pem
	I1120 20:52:24.179204  242858 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/77312.pem /etc/ssl/certs/77312.pem
	I1120 20:52:24.187192  242858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77312.pem
	I1120 20:52:24.191320  242858 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:26 /usr/share/ca-certificates/77312.pem
	I1120 20:52:24.191430  242858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77312.pem
	I1120 20:52:24.227765  242858 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 20:52:24.236942  242858 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/77312.pem /etc/ssl/certs/3ec20f2e.0
	I1120 20:52:24.245416  242858 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:24.253726  242858 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:52:24.261768  242858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:24.265899  242858 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:24.265952  242858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:24.302264  242858 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 20:52:24.310398  242858 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 20:52:24.318255  242858 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7731.pem
	I1120 20:52:24.326249  242858 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7731.pem /etc/ssl/certs/7731.pem
	I1120 20:52:24.334629  242858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7731.pem
	I1120 20:52:24.338657  242858 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:26 /usr/share/ca-certificates/7731.pem
	I1120 20:52:24.338728  242858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7731.pem
	I1120 20:52:24.376799  242858 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 20:52:24.385154  242858 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7731.pem /etc/ssl/certs/51391683.0
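The repeated test/ln/openssl sequence above is the standard OpenSSL subject-hash (c_rehash) pattern: each CA PEM under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its hash name, which is why minikubeCA.pem ends up as b5213941.0. A condensed sketch of the same idea for a single certificate (paths are examples):

  # Sketch: link a CA cert under its OpenSSL subject hash so TLS tooling can find it.
  CERT=/usr/share/ca-certificates/minikubeCA.pem
  HASH=$(openssl x509 -hash -noout -in "$CERT")
  sudo ln -fs "$CERT" /etc/ssl/certs/"$HASH".0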
	I1120 20:52:24.393234  242858 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:52:24.397049  242858 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 20:52:24.397114  242858 kubeadm.go:401] StartCluster: {Name:no-preload-480337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-480337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:52:24.397194  242858 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1120 20:52:24.397267  242858 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:52:24.428423  242858 cri.go:89] found id: ""
	I1120 20:52:24.428487  242858 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 20:52:24.438710  242858 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 20:52:24.449299  242858 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 20:52:24.449375  242858 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 20:52:24.459536  242858 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 20:52:24.459556  242858 kubeadm.go:158] found existing configuration files:
	
	I1120 20:52:24.459604  242858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 20:52:24.468144  242858 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 20:52:24.468200  242858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 20:52:24.476815  242858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 20:52:24.486242  242858 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 20:52:24.486304  242858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 20:52:24.496106  242858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 20:52:24.505724  242858 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 20:52:24.505782  242858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 20:52:24.514672  242858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 20:52:24.524027  242858 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 20:52:24.524092  242858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 20:52:24.533975  242858 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 20:52:24.619419  242858 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1120 20:52:24.680410  242858 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1120 20:52:25.119561  238148 node_ready.go:57] node "old-k8s-version-715005" has "Ready":"False" status (will retry)
	W1120 20:52:27.643468  238148 node_ready.go:57] node "old-k8s-version-715005" has "Ready":"False" status (will retry)
	I1120 20:52:27.593264  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1120 20:52:27.593340  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:27.750894  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:35434->192.168.103.2:8443: read: connection reset by peer
	I1120 20:52:28.086453  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:28.086851  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:28.586485  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:28.586954  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:29.086442  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:29.086851  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:29.587389  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:29.587828  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	W1120 20:52:30.118581  238148 node_ready.go:57] node "old-k8s-version-715005" has "Ready":"False" status (will retry)
	W1120 20:52:32.618929  238148 node_ready.go:57] node "old-k8s-version-715005" has "Ready":"False" status (will retry)
	I1120 20:52:30.086483  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:30.086908  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:30.586449  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:30.586956  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:31.086463  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:31.086895  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:31.586541  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:31.586971  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:32.086571  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:32.087046  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:32.586544  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:32.586991  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:33.086432  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:33.086879  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:33.586450  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:33.586927  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:34.086541  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:34.086925  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:34.586458  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:34.586899  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:35.816708  242858 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 20:52:35.816800  242858 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 20:52:35.816948  242858 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 20:52:35.817027  242858 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1120 20:52:35.817097  242858 kubeadm.go:319] OS: Linux
	I1120 20:52:35.817148  242858 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 20:52:35.817194  242858 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 20:52:35.817243  242858 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 20:52:35.817320  242858 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 20:52:35.817383  242858 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 20:52:35.817442  242858 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 20:52:35.817491  242858 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 20:52:35.817568  242858 kubeadm.go:319] CGROUPS_IO: enabled
	I1120 20:52:35.817637  242858 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 20:52:35.817722  242858 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 20:52:35.817809  242858 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 20:52:35.817887  242858 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 20:52:35.819194  242858 out.go:252]   - Generating certificates and keys ...
	I1120 20:52:35.819273  242858 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 20:52:35.819349  242858 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 20:52:35.819472  242858 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 20:52:35.819552  242858 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 20:52:35.819646  242858 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 20:52:35.819695  242858 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 20:52:35.819746  242858 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 20:52:35.819854  242858 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-480337] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1120 20:52:35.819903  242858 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 20:52:35.820019  242858 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-480337] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1120 20:52:35.820080  242858 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 20:52:35.820140  242858 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 20:52:35.820196  242858 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 20:52:35.820273  242858 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 20:52:35.820348  242858 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 20:52:35.820452  242858 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 20:52:35.820533  242858 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 20:52:35.820630  242858 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 20:52:35.820707  242858 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 20:52:35.820799  242858 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 20:52:35.820874  242858 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 20:52:35.822148  242858 out.go:252]   - Booting up control plane ...
	I1120 20:52:35.822225  242858 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 20:52:35.822291  242858 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 20:52:35.822359  242858 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 20:52:35.822472  242858 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 20:52:35.822579  242858 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 20:52:35.822729  242858 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 20:52:35.822820  242858 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 20:52:35.822877  242858 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 20:52:35.823038  242858 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 20:52:35.823159  242858 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 20:52:35.823248  242858 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000889914s
	I1120 20:52:35.823363  242858 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 20:52:35.823499  242858 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1120 20:52:35.823658  242858 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 20:52:35.823780  242858 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 20:52:35.823893  242858 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.505586489s
	I1120 20:52:35.823986  242858 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.124627255s
	I1120 20:52:35.824101  242858 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00208951s
	I1120 20:52:35.824256  242858 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 20:52:35.824442  242858 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 20:52:35.824544  242858 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 20:52:35.824787  242858 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-480337 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 20:52:35.824867  242858 kubeadm.go:319] [bootstrap-token] Using token: kimko6.d8ifdar0sarfgkue
	I1120 20:52:35.826341  242858 out.go:252]   - Configuring RBAC rules ...
	I1120 20:52:35.826458  242858 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 20:52:35.826533  242858 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 20:52:35.826671  242858 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 20:52:35.826791  242858 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 20:52:35.826891  242858 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 20:52:35.826964  242858 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 20:52:35.827060  242858 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 20:52:35.827108  242858 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 20:52:35.827153  242858 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 20:52:35.827163  242858 kubeadm.go:319] 
	I1120 20:52:35.827221  242858 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 20:52:35.827227  242858 kubeadm.go:319] 
	I1120 20:52:35.827301  242858 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 20:52:35.827308  242858 kubeadm.go:319] 
	I1120 20:52:35.827329  242858 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 20:52:35.827400  242858 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 20:52:35.827444  242858 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 20:52:35.827450  242858 kubeadm.go:319] 
	I1120 20:52:35.827494  242858 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 20:52:35.827500  242858 kubeadm.go:319] 
	I1120 20:52:35.827540  242858 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 20:52:35.827545  242858 kubeadm.go:319] 
	I1120 20:52:35.827590  242858 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 20:52:35.827671  242858 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 20:52:35.827770  242858 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 20:52:35.827778  242858 kubeadm.go:319] 
	I1120 20:52:35.827890  242858 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 20:52:35.827978  242858 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 20:52:35.827985  242858 kubeadm.go:319] 
	I1120 20:52:35.828060  242858 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kimko6.d8ifdar0sarfgkue \
	I1120 20:52:35.828152  242858 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6363bf4687e9474b61ef24181dbec602d7e15f5bf816f1e3fd72b87e3c0c983f \
	I1120 20:52:35.828172  242858 kubeadm.go:319] 	--control-plane 
	I1120 20:52:35.828177  242858 kubeadm.go:319] 
	I1120 20:52:35.828263  242858 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 20:52:35.828270  242858 kubeadm.go:319] 
	I1120 20:52:35.828355  242858 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kimko6.d8ifdar0sarfgkue \
	I1120 20:52:35.828521  242858 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6363bf4687e9474b61ef24181dbec602d7e15f5bf816f1e3fd72b87e3c0c983f 
	I1120 20:52:35.828535  242858 cni.go:84] Creating CNI manager for ""
	I1120 20:52:35.828540  242858 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 20:52:35.829886  242858 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 20:52:35.831052  242858 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 20:52:35.835504  242858 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 20:52:35.835522  242858 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 20:52:35.848645  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 20:52:36.063646  242858 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 20:52:36.063726  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:36.063745  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-480337 minikube.k8s.io/updated_at=2025_11_20T20_52_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=no-preload-480337 minikube.k8s.io/primary=true
	I1120 20:52:36.154515  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:36.154568  242858 ops.go:34] apiserver oom_adj: -16
	I1120 20:52:36.655018  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:37.154622  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:37.654645  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:38.155458  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1120 20:52:35.118798  238148 node_ready.go:57] node "old-k8s-version-715005" has "Ready":"False" status (will retry)
	I1120 20:52:36.119441  238148 node_ready.go:49] node "old-k8s-version-715005" is "Ready"
	I1120 20:52:36.119474  238148 node_ready.go:38] duration metric: took 13.004118914s for node "old-k8s-version-715005" to be "Ready" ...
	I1120 20:52:36.119492  238148 api_server.go:52] waiting for apiserver process to appear ...
	I1120 20:52:36.119550  238148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:52:36.136261  238148 api_server.go:72] duration metric: took 13.446468406s to wait for apiserver process to appear ...
	I1120 20:52:36.136286  238148 api_server.go:88] waiting for apiserver healthz status ...
	I1120 20:52:36.136303  238148 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1120 20:52:36.142400  238148 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
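These healthz probes hit the apiserver's unauthenticated /healthz endpoint (upstream Kubernetes grants system:unauthenticated access to it by default through the system:public-info-viewer binding). A sketch of the same probe from the host, using the 192.168.85.2:8443 endpoint shown above; -k skips verification of the self-signed minikubeCA certificate:

  # Sketch: anonymous apiserver health probe.
  curl -sk https://192.168.85.2:8443/healthz && echo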
	I1120 20:52:36.143959  238148 api_server.go:141] control plane version: v1.28.0
	I1120 20:52:36.143990  238148 api_server.go:131] duration metric: took 7.697032ms to wait for apiserver health ...
	I1120 20:52:36.144000  238148 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 20:52:36.148683  238148 system_pods.go:59] 8 kube-system pods found
	I1120 20:52:36.148739  238148 system_pods.go:61] "coredns-5dd5756b68-mptgs" [2c198f77-2da3-4dc0-98f2-5263299ec40b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:52:36.148757  238148 system_pods.go:61] "etcd-old-k8s-version-715005" [0bf088f9-234a-4a72-9d1b-d6f088300b75] Running
	I1120 20:52:36.148764  238148 system_pods.go:61] "kindnet-cfz75" [0042d6a2-8643-46e3-902b-f53060fcf7d2] Running
	I1120 20:52:36.148769  238148 system_pods.go:61] "kube-apiserver-old-k8s-version-715005" [8e225071-07c8-4edf-859f-88b2e5001f12] Running
	I1120 20:52:36.148783  238148 system_pods.go:61] "kube-controller-manager-old-k8s-version-715005" [57c8b5dd-c382-44a4-b0d8-6daff8243ac0] Running
	I1120 20:52:36.148787  238148 system_pods.go:61] "kube-proxy-4pnqq" [b58b571d-f605-4fd4-8afa-d17455aaaaab] Running
	I1120 20:52:36.148793  238148 system_pods.go:61] "kube-scheduler-old-k8s-version-715005" [31fcfd0d-7579-4237-96e8-08202f831aa8] Running
	I1120 20:52:36.148814  238148 system_pods.go:61] "storage-provisioner" [6af79ed2-0bd8-44f7-a2bb-8e7788cf7111] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:52:36.148824  238148 system_pods.go:74] duration metric: took 4.816269ms to wait for pod list to return data ...
	I1120 20:52:36.148839  238148 default_sa.go:34] waiting for default service account to be created ...
	I1120 20:52:36.151480  238148 default_sa.go:45] found service account: "default"
	I1120 20:52:36.151502  238148 default_sa.go:55] duration metric: took 2.6562ms for default service account to be created ...
	I1120 20:52:36.151511  238148 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 20:52:36.155002  238148 system_pods.go:86] 8 kube-system pods found
	I1120 20:52:36.155030  238148 system_pods.go:89] "coredns-5dd5756b68-mptgs" [2c198f77-2da3-4dc0-98f2-5263299ec40b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:52:36.155037  238148 system_pods.go:89] "etcd-old-k8s-version-715005" [0bf088f9-234a-4a72-9d1b-d6f088300b75] Running
	I1120 20:52:36.155044  238148 system_pods.go:89] "kindnet-cfz75" [0042d6a2-8643-46e3-902b-f53060fcf7d2] Running
	I1120 20:52:36.155050  238148 system_pods.go:89] "kube-apiserver-old-k8s-version-715005" [8e225071-07c8-4edf-859f-88b2e5001f12] Running
	I1120 20:52:36.155055  238148 system_pods.go:89] "kube-controller-manager-old-k8s-version-715005" [57c8b5dd-c382-44a4-b0d8-6daff8243ac0] Running
	I1120 20:52:36.155059  238148 system_pods.go:89] "kube-proxy-4pnqq" [b58b571d-f605-4fd4-8afa-d17455aaaaab] Running
	I1120 20:52:36.155070  238148 system_pods.go:89] "kube-scheduler-old-k8s-version-715005" [31fcfd0d-7579-4237-96e8-08202f831aa8] Running
	I1120 20:52:36.155077  238148 system_pods.go:89] "storage-provisioner" [6af79ed2-0bd8-44f7-a2bb-8e7788cf7111] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:52:36.155118  238148 retry.go:31] will retry after 266.136492ms: missing components: kube-dns
	I1120 20:52:36.426029  238148 system_pods.go:86] 8 kube-system pods found
	I1120 20:52:36.426097  238148 system_pods.go:89] "coredns-5dd5756b68-mptgs" [2c198f77-2da3-4dc0-98f2-5263299ec40b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:52:36.426109  238148 system_pods.go:89] "etcd-old-k8s-version-715005" [0bf088f9-234a-4a72-9d1b-d6f088300b75] Running
	I1120 20:52:36.426121  238148 system_pods.go:89] "kindnet-cfz75" [0042d6a2-8643-46e3-902b-f53060fcf7d2] Running
	I1120 20:52:36.426128  238148 system_pods.go:89] "kube-apiserver-old-k8s-version-715005" [8e225071-07c8-4edf-859f-88b2e5001f12] Running
	I1120 20:52:36.426137  238148 system_pods.go:89] "kube-controller-manager-old-k8s-version-715005" [57c8b5dd-c382-44a4-b0d8-6daff8243ac0] Running
	I1120 20:52:36.426146  238148 system_pods.go:89] "kube-proxy-4pnqq" [b58b571d-f605-4fd4-8afa-d17455aaaaab] Running
	I1120 20:52:36.426151  238148 system_pods.go:89] "kube-scheduler-old-k8s-version-715005" [31fcfd0d-7579-4237-96e8-08202f831aa8] Running
	I1120 20:52:36.426156  238148 system_pods.go:89] "storage-provisioner" [6af79ed2-0bd8-44f7-a2bb-8e7788cf7111] Running
	I1120 20:52:36.426166  238148 system_pods.go:126] duration metric: took 274.648335ms to wait for k8s-apps to be running ...
	I1120 20:52:36.426174  238148 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:52:36.426226  238148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:52:36.440582  238148 system_svc.go:56] duration metric: took 14.395654ms WaitForService to wait for kubelet
	I1120 20:52:36.440618  238148 kubeadm.go:587] duration metric: took 13.750832492s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:52:36.440642  238148 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:52:36.443487  238148 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:52:36.443516  238148 node_conditions.go:123] node cpu capacity is 8
	I1120 20:52:36.443534  238148 node_conditions.go:105] duration metric: took 2.886705ms to run NodePressure ...
	I1120 20:52:36.443549  238148 start.go:242] waiting for startup goroutines ...
	I1120 20:52:36.443558  238148 start.go:247] waiting for cluster config update ...
	I1120 20:52:36.443570  238148 start.go:256] writing updated cluster config ...
	I1120 20:52:36.443910  238148 ssh_runner.go:195] Run: rm -f paused
	I1120 20:52:36.447835  238148 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:52:36.451653  238148 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-mptgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:37.457441  238148 pod_ready.go:94] pod "coredns-5dd5756b68-mptgs" is "Ready"
	I1120 20:52:37.457465  238148 pod_ready.go:86] duration metric: took 1.005790774s for pod "coredns-5dd5756b68-mptgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:37.460607  238148 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-715005" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:37.464240  238148 pod_ready.go:94] pod "etcd-old-k8s-version-715005" is "Ready"
	I1120 20:52:37.464258  238148 pod_ready.go:86] duration metric: took 3.632649ms for pod "etcd-old-k8s-version-715005" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:37.466560  238148 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-715005" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:37.469867  238148 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-715005" is "Ready"
	I1120 20:52:37.469885  238148 pod_ready.go:86] duration metric: took 3.300833ms for pod "kube-apiserver-old-k8s-version-715005" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:37.472108  238148 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-715005" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:37.655747  238148 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-715005" is "Ready"
	I1120 20:52:37.655772  238148 pod_ready.go:86] duration metric: took 183.642109ms for pod "kube-controller-manager-old-k8s-version-715005" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:37.856083  238148 pod_ready.go:83] waiting for pod "kube-proxy-4pnqq" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:38.255430  238148 pod_ready.go:94] pod "kube-proxy-4pnqq" is "Ready"
	I1120 20:52:38.255490  238148 pod_ready.go:86] duration metric: took 399.383229ms for pod "kube-proxy-4pnqq" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:38.456007  238148 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-715005" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:38.855855  238148 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-715005" is "Ready"
	I1120 20:52:38.855880  238148 pod_ready.go:86] duration metric: took 399.852833ms for pod "kube-scheduler-old-k8s-version-715005" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:38.855890  238148 pod_ready.go:40] duration metric: took 2.408021676s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:52:38.898974  238148 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1120 20:52:38.900810  238148 out.go:203] 
	W1120 20:52:38.902141  238148 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1120 20:52:38.903278  238148 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1120 20:52:38.904757  238148 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-715005" cluster and "default" namespace by default
	I1120 20:52:35.086860  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:35.087261  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:35.587416  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:35.587799  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:36.087455  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:36.087855  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:36.586439  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:36.586878  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:37.086442  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:37.086847  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:37.587357  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:37.587842  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:38.086405  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:38.086807  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:38.586903  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:38.587307  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:39.086541  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:39.086974  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:39.586441  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:39.586902  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:38.655596  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:39.155616  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:39.655601  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:40.154636  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:40.220222  242858 kubeadm.go:1114] duration metric: took 4.156549094s to wait for elevateKubeSystemPrivileges
	I1120 20:52:40.220261  242858 kubeadm.go:403] duration metric: took 15.823151044s to StartCluster
	I1120 20:52:40.220283  242858 settings.go:142] acquiring lock: {Name:mkd78c1a946fc1da0bff0b049ee93f62b6457c3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:40.220356  242858 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-3769/kubeconfig
	I1120 20:52:40.221736  242858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/kubeconfig: {Name:mk92246a312eabd67c28c34f15135551d85e2541 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:40.221992  242858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 20:52:40.222016  242858 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 20:52:40.221988  242858 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1120 20:52:40.222107  242858 addons.go:70] Setting storage-provisioner=true in profile "no-preload-480337"
	I1120 20:52:40.222123  242858 addons.go:239] Setting addon storage-provisioner=true in "no-preload-480337"
	I1120 20:52:40.222150  242858 host.go:66] Checking if "no-preload-480337" exists ...
	I1120 20:52:40.222183  242858 addons.go:70] Setting default-storageclass=true in profile "no-preload-480337"
	I1120 20:52:40.222205  242858 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-480337"
	I1120 20:52:40.222208  242858 config.go:182] Loaded profile config "no-preload-480337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:52:40.222552  242858 cli_runner.go:164] Run: docker container inspect no-preload-480337 --format={{.State.Status}}
	I1120 20:52:40.222707  242858 cli_runner.go:164] Run: docker container inspect no-preload-480337 --format={{.State.Status}}
	I1120 20:52:40.223621  242858 out.go:179] * Verifying Kubernetes components...
	I1120 20:52:40.224838  242858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:52:40.245867  242858 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:52:40.246538  242858 addons.go:239] Setting addon default-storageclass=true in "no-preload-480337"
	I1120 20:52:40.246583  242858 host.go:66] Checking if "no-preload-480337" exists ...
	I1120 20:52:40.246851  242858 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:52:40.246867  242858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 20:52:40.246921  242858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-480337
	I1120 20:52:40.247059  242858 cli_runner.go:164] Run: docker container inspect no-preload-480337 --format={{.State.Status}}
	I1120 20:52:40.280143  242858 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 20:52:40.280169  242858 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 20:52:40.280238  242858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-480337
	I1120 20:52:40.282336  242858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/no-preload-480337/id_rsa Username:docker}
	I1120 20:52:40.308080  242858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/no-preload-480337/id_rsa Username:docker}
	I1120 20:52:40.319537  242858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 20:52:40.366219  242858 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:52:40.400839  242858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:52:40.419278  242858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 20:52:40.488010  242858 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1120 20:52:40.489308  242858 node_ready.go:35] waiting up to 6m0s for node "no-preload-480337" to be "Ready" ...
	I1120 20:52:40.705813  242858 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1120 20:52:40.707763  242858 addons.go:515] duration metric: took 485.74699ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1120 20:52:40.992476  242858 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-480337" context rescaled to 1 replicas
	W1120 20:52:42.491841  242858 node_ready.go:57] node "no-preload-480337" has "Ready":"False" status (will retry)
	I1120 20:52:40.086449  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:40.086895  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:40.586439  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:40.586951  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:41.087144  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:41.087603  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:41.587035  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:41.587526  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:42.087212  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:42.087656  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:42.587397  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:42.587795  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:43.086420  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:43.086825  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:43.586409  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:43.586769  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:44.087148  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:44.087553  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:44.587022  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:44.587465  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	W1120 20:52:44.493002  242858 node_ready.go:57] node "no-preload-480337" has "Ready":"False" status (will retry)
	W1120 20:52:46.991763  242858 node_ready.go:57] node "no-preload-480337" has "Ready":"False" status (will retry)
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	cff0ed278d62d       56cc512116c8f       8 seconds ago       Running             busybox                   0                   71df75dca1dd7       busybox                                          default
	b238eb5506919       ead0a4a53df89       12 seconds ago      Running             coredns                   0                   3b8b46903b404       coredns-5dd5756b68-mptgs                         kube-system
	30d0f5bdd9f8b       6e38f40d628db       12 seconds ago      Running             storage-provisioner       0                   dad0efe137051       storage-provisioner                              kube-system
	31461353b2022       409467f978b4a       23 seconds ago      Running             kindnet-cni               0                   0e6a4046c3e58       kindnet-cfz75                                    kube-system
	cd89c06d39abe       ea1030da44aa1       25 seconds ago      Running             kube-proxy                0                   d1996aaa95795       kube-proxy-4pnqq                                 kube-system
	129c92b2baf2f       4be79c38a4bab       43 seconds ago      Running             kube-controller-manager   0                   37e7b3b214e2f       kube-controller-manager-old-k8s-version-715005   kube-system
	51b9d78e7f6a4       f6f496300a2ae       43 seconds ago      Running             kube-scheduler            0                   659db2c56a9ce       kube-scheduler-old-k8s-version-715005            kube-system
	9b9c74b02fcb4       bb5e0dde9054c       43 seconds ago      Running             kube-apiserver            0                   046251feeeba0       kube-apiserver-old-k8s-version-715005            kube-system
	bd79d6bd69267       73deb9a3f7025       43 seconds ago      Running             etcd                      0                   0ccf99ccdb55a       etcd-old-k8s-version-715005                      kube-system
	
	
	==> containerd <==
	Nov 20 20:52:36 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:36.331054404Z" level=info msg="CreateContainer within sandbox \"dad0efe1370515d6a5e283f690b5861af819ca7c438225b3992c0fcc85ae50b6\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"30d0f5bdd9f8bfd2c0796639f0ed8e490844e6c98a2754a2c49f7959c1a1f2a5\""
	Nov 20 20:52:36 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:36.331514371Z" level=info msg="StartContainer for \"30d0f5bdd9f8bfd2c0796639f0ed8e490844e6c98a2754a2c49f7959c1a1f2a5\""
	Nov 20 20:52:36 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:36.332524517Z" level=info msg="connecting to shim 30d0f5bdd9f8bfd2c0796639f0ed8e490844e6c98a2754a2c49f7959c1a1f2a5" address="unix:///run/containerd/s/f85382523371363a580faab823f4564ef702cb91dd77ece3725ccb1af7d38b25" protocol=ttrpc version=3
	Nov 20 20:52:36 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:36.334597791Z" level=info msg="CreateContainer within sandbox \"3b8b46903b404473bef4a273a4ab27ff906ec052ea45e4b4212bd43b455cdbd2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b238eb5506919601ee7f82047857465eb95fdc7e8c4184d95c6a62098235f212\""
	Nov 20 20:52:36 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:36.335057955Z" level=info msg="StartContainer for \"b238eb5506919601ee7f82047857465eb95fdc7e8c4184d95c6a62098235f212\""
	Nov 20 20:52:36 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:36.335820086Z" level=info msg="connecting to shim b238eb5506919601ee7f82047857465eb95fdc7e8c4184d95c6a62098235f212" address="unix:///run/containerd/s/801daa28e941e0441ab99e0b93ec314b977136497b29ce8e8c5cb393ef1573e3" protocol=ttrpc version=3
	Nov 20 20:52:36 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:36.383725768Z" level=info msg="StartContainer for \"b238eb5506919601ee7f82047857465eb95fdc7e8c4184d95c6a62098235f212\" returns successfully"
	Nov 20 20:52:36 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:36.384073634Z" level=info msg="StartContainer for \"30d0f5bdd9f8bfd2c0796639f0ed8e490844e6c98a2754a2c49f7959c1a1f2a5\" returns successfully"
	Nov 20 20:52:39 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:39.364312535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3a1d0e8f-ce19-4ac1-bea8-96d6e879131e,Namespace:default,Attempt:0,}"
	Nov 20 20:52:39 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:39.408963097Z" level=info msg="connecting to shim 71df75dca1dd7b6b576f14d9ba6b5539f9f6f882cc9da670c1b41fd83dcc5c08" address="unix:///run/containerd/s/ada5c2f6fb8ba3beb99f5d6ca5c34f6ee268100be418787584e6f9aad68bf647" namespace=k8s.io protocol=ttrpc version=3
	Nov 20 20:52:39 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:39.484006268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3a1d0e8f-ce19-4ac1-bea8-96d6e879131e,Namespace:default,Attempt:0,} returns sandbox id \"71df75dca1dd7b6b576f14d9ba6b5539f9f6f882cc9da670c1b41fd83dcc5c08\""
	Nov 20 20:52:39 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:39.485812554Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.847165187Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.847932562Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396647"
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.849388484Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.850951671Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.851443868Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 1.365589938s"
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.851488669Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.853206452Z" level=info msg="CreateContainer within sandbox \"71df75dca1dd7b6b576f14d9ba6b5539f9f6f882cc9da670c1b41fd83dcc5c08\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.860224048Z" level=info msg="Container cff0ed278d62dd9ed10cae5e5f96874eb05c5e603320748af2dfb6ef4f86494a: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.865791220Z" level=info msg="CreateContainer within sandbox \"71df75dca1dd7b6b576f14d9ba6b5539f9f6f882cc9da670c1b41fd83dcc5c08\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"cff0ed278d62dd9ed10cae5e5f96874eb05c5e603320748af2dfb6ef4f86494a\""
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.866334070Z" level=info msg="StartContainer for \"cff0ed278d62dd9ed10cae5e5f96874eb05c5e603320748af2dfb6ef4f86494a\""
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.867211058Z" level=info msg="connecting to shim cff0ed278d62dd9ed10cae5e5f96874eb05c5e603320748af2dfb6ef4f86494a" address="unix:///run/containerd/s/ada5c2f6fb8ba3beb99f5d6ca5c34f6ee268100be418787584e6f9aad68bf647" protocol=ttrpc version=3
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.920084522Z" level=info msg="StartContainer for \"cff0ed278d62dd9ed10cae5e5f96874eb05c5e603320748af2dfb6ef4f86494a\" returns successfully"
	Nov 20 20:52:48 old-k8s-version-715005 containerd[661]: E1120 20:52:48.135214     661 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [b238eb5506919601ee7f82047857465eb95fdc7e8c4184d95c6a62098235f212] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44202 - 26657 "HINFO IN 3488307865202641534.2109671425240872498. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.449966311s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-715005
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-715005
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=old-k8s-version-715005
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_52_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:52:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-715005
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:52:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:52:41 +0000   Thu, 20 Nov 2025 20:52:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:52:41 +0000   Thu, 20 Nov 2025 20:52:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:52:41 +0000   Thu, 20 Nov 2025 20:52:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:52:41 +0000   Thu, 20 Nov 2025 20:52:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-715005
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                81d39874-f554-4f8e-9c90-bef57a66d9b2
	  Boot ID:                    7bcace10-faf8-4276-88b3-44b8d57bd915
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-mptgs                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-old-k8s-version-715005                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-cfz75                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-715005             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-715005    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-4pnqq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-715005             100m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 39s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  39s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  39s   kubelet          Node old-k8s-version-715005 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s   kubelet          Node old-k8s-version-715005 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s   kubelet          Node old-k8s-version-715005 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node old-k8s-version-715005 event: Registered Node old-k8s-version-715005 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-715005 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov20 20:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001791] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.083011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400115] i8042: Warning: Keylock active
	[  +0.013837] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499559] block sda: the capability attribute has been deprecated.
	[  +0.087912] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024934] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.433429] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [bd79d6bd6926714eb9fe7608d919a6bea130b15fb4cba41cc3d774f5a9ab2a7e] <==
	{"level":"info","ts":"2025-11-20T20:52:05.229987Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-20T20:52:05.230074Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-20T20:52:05.230114Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-20T20:52:05.918054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-20T20:52:05.918158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-20T20:52:05.918188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-20T20:52:05.918206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-20T20:52:05.918211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-20T20:52:05.918219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-20T20:52:05.918227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-20T20:52:05.919055Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T20:52:05.919714Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-715005 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-20T20:52:05.919751Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-20T20:52:05.919774Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-20T20:52:05.920218Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-20T20:52:05.920249Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-20T20:52:05.920456Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T20:52:05.920676Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T20:52:05.920973Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T20:52:05.921153Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-20T20:52:05.923247Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-20T20:52:28.821816Z","caller":"traceutil/trace.go:171","msg":"trace[1388972908] linearizableReadLoop","detail":"{readStateIndex:393; appliedIndex:392; }","duration":"204.166324ms","start":"2025-11-20T20:52:28.617625Z","end":"2025-11-20T20:52:28.821792Z","steps":["trace[1388972908] 'read index received'  (duration: 127.067096ms)","trace[1388972908] 'applied index is now lower than readState.Index'  (duration: 77.098386ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T20:52:28.821848Z","caller":"traceutil/trace.go:171","msg":"trace[50526453] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"205.569936ms","start":"2025-11-20T20:52:28.616251Z","end":"2025-11-20T20:52:28.821821Z","steps":["trace[50526453] 'process raft request'  (duration: 128.495071ms)","trace[50526453] 'compare'  (duration: 76.913082ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T20:52:28.822044Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.39432ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/old-k8s-version-715005\" ","response":"range_response_count:1 size:4738"}
	{"level":"info","ts":"2025-11-20T20:52:28.822092Z","caller":"traceutil/trace.go:171","msg":"trace[1042929095] range","detail":"{range_begin:/registry/minions/old-k8s-version-715005; range_end:; response_count:1; response_revision:378; }","duration":"204.491416ms","start":"2025-11-20T20:52:28.617589Z","end":"2025-11-20T20:52:28.822081Z","steps":["trace[1042929095] 'agreement among raft nodes before linearized reading'  (duration: 204.292562ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:52:49 up 35 min,  0 user,  load average: 3.54, 3.02, 1.94
	Linux old-k8s-version-715005 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [31461353b202200468aa23f3972e4e462db51e670ff467500d67a4a3bf84828c] <==
	I1120 20:52:25.577955       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 20:52:25.595537       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 20:52:25.595672       1 main.go:148] setting mtu 1500 for CNI 
	I1120 20:52:25.595689       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 20:52:25.595717       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T20:52:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 20:52:25.799345       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 20:52:25.799395       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 20:52:25.799411       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 20:52:25.799764       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 20:52:26.195526       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 20:52:26.195566       1 metrics.go:72] Registering metrics
	I1120 20:52:26.195655       1 controller.go:711] "Syncing nftables rules"
	I1120 20:52:35.807032       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 20:52:35.807104       1 main.go:301] handling current node
	I1120 20:52:45.799883       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 20:52:45.799943       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9b9c74b02fcb4b147d54a9f31669c3eaf326a38bd4dcd1194a2c0d07d79aaca1] <==
	I1120 20:52:07.075686       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1120 20:52:07.076243       1 shared_informer.go:318] Caches are synced for configmaps
	I1120 20:52:07.077707       1 controller.go:624] quota admission added evaluator for: namespaces
	I1120 20:52:07.078484       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1120 20:52:07.078627       1 aggregator.go:166] initial CRD sync complete...
	I1120 20:52:07.078643       1 autoregister_controller.go:141] Starting autoregister controller
	I1120 20:52:07.078650       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 20:52:07.078659       1 cache.go:39] Caches are synced for autoregister controller
	I1120 20:52:07.113958       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 20:52:07.990626       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 20:52:07.994280       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 20:52:07.994300       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 20:52:08.407696       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 20:52:08.442971       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 20:52:08.587450       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 20:52:08.593189       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1120 20:52:08.594181       1 controller.go:624] quota admission added evaluator for: endpoints
	I1120 20:52:08.598322       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 20:52:09.034088       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1120 20:52:10.221091       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1120 20:52:10.238151       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 20:52:10.251155       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1120 20:52:22.642746       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1120 20:52:22.642831       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1120 20:52:22.798006       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [129c92b2baf2f5d973e359f010839efa78cf975a381962dd3873c5fa1d291869] <==
	I1120 20:52:22.089891       1 shared_informer.go:318] Caches are synced for deployment
	I1120 20:52:22.097778       1 shared_informer.go:318] Caches are synced for resource quota
	I1120 20:52:22.410232       1 shared_informer.go:318] Caches are synced for garbage collector
	I1120 20:52:22.486848       1 shared_informer.go:318] Caches are synced for garbage collector
	I1120 20:52:22.486886       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1120 20:52:22.654034       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4pnqq"
	I1120 20:52:22.655412       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-cfz75"
	I1120 20:52:22.803882       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1120 20:52:22.898093       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-hnbwt"
	I1120 20:52:22.906310       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-mptgs"
	I1120 20:52:22.916699       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="112.923624ms"
	I1120 20:52:22.927435       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.677803ms"
	I1120 20:52:22.951258       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.764186ms"
	I1120 20:52:22.951454       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="134.492µs"
	I1120 20:52:23.145279       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1120 20:52:23.157612       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-hnbwt"
	I1120 20:52:23.166328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.800363ms"
	I1120 20:52:23.172401       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.997974ms"
	I1120 20:52:23.172562       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.526µs"
	I1120 20:52:35.907119       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="129.527µs"
	I1120 20:52:35.922303       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="129.211µs"
	I1120 20:52:36.421114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="122.801µs"
	I1120 20:52:36.836379       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1120 20:52:37.420026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.210833ms"
	I1120 20:52:37.420110       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.049µs"
	
	
	==> kube-proxy [cd89c06d39abe013aea89d98c9df900a06c30cb2d739e0a9660b3d6b845006f2] <==
	I1120 20:52:23.316357       1 server_others.go:69] "Using iptables proxy"
	I1120 20:52:23.325940       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1120 20:52:23.346710       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 20:52:23.348974       1 server_others.go:152] "Using iptables Proxier"
	I1120 20:52:23.349015       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1120 20:52:23.349021       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1120 20:52:23.349053       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1120 20:52:23.349270       1 server.go:846] "Version info" version="v1.28.0"
	I1120 20:52:23.349284       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:52:23.350668       1 config.go:188] "Starting service config controller"
	I1120 20:52:23.350707       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1120 20:52:23.350735       1 config.go:97] "Starting endpoint slice config controller"
	I1120 20:52:23.350739       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1120 20:52:23.351421       1 config.go:315] "Starting node config controller"
	I1120 20:52:23.351457       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1120 20:52:23.450834       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1120 20:52:23.450857       1 shared_informer.go:318] Caches are synced for service config
	I1120 20:52:23.452185       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [51b9d78e7f6a4ddaf97aa93f6a3303b88d8ea9c948782289642216f4875377d6] <==
	W1120 20:52:07.043502       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1120 20:52:07.043581       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1120 20:52:07.043601       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1120 20:52:07.043728       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1120 20:52:07.043770       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1120 20:52:07.043792       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1120 20:52:07.044129       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1120 20:52:07.044156       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1120 20:52:07.044351       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1120 20:52:07.044412       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1120 20:52:07.881440       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1120 20:52:07.881475       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1120 20:52:07.903871       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1120 20:52:07.903902       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1120 20:52:07.933445       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1120 20:52:07.933478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1120 20:52:08.001403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1120 20:52:08.001449       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1120 20:52:08.038750       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1120 20:52:08.038792       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1120 20:52:08.103503       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1120 20:52:08.103539       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1120 20:52:08.137088       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1120 20:52:08.137130       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1120 20:52:11.239121       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 20 20:52:21 old-k8s-version-715005 kubelet[1553]: I1120 20:52:21.975542    1553 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 20:52:22 old-k8s-version-715005 kubelet[1553]: I1120 20:52:22.664556    1553 topology_manager.go:215] "Topology Admit Handler" podUID="0042d6a2-8643-46e3-902b-f53060fcf7d2" podNamespace="kube-system" podName="kindnet-cfz75"
	Nov 20 20:52:22 old-k8s-version-715005 kubelet[1553]: I1120 20:52:22.665169    1553 topology_manager.go:215] "Topology Admit Handler" podUID="b58b571d-f605-4fd4-8afa-d17455aaaaab" podNamespace="kube-system" podName="kube-proxy-4pnqq"
	Nov 20 20:52:22 old-k8s-version-715005 kubelet[1553]: I1120 20:52:22.679738    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b58b571d-f605-4fd4-8afa-d17455aaaaab-kube-proxy\") pod \"kube-proxy-4pnqq\" (UID: \"b58b571d-f605-4fd4-8afa-d17455aaaaab\") " pod="kube-system/kube-proxy-4pnqq"
	Nov 20 20:52:22 old-k8s-version-715005 kubelet[1553]: I1120 20:52:22.679779    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0042d6a2-8643-46e3-902b-f53060fcf7d2-xtables-lock\") pod \"kindnet-cfz75\" (UID: \"0042d6a2-8643-46e3-902b-f53060fcf7d2\") " pod="kube-system/kindnet-cfz75"
	Nov 20 20:52:22 old-k8s-version-715005 kubelet[1553]: I1120 20:52:22.679797    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0042d6a2-8643-46e3-902b-f53060fcf7d2-lib-modules\") pod \"kindnet-cfz75\" (UID: \"0042d6a2-8643-46e3-902b-f53060fcf7d2\") " pod="kube-system/kindnet-cfz75"
	Nov 20 20:52:22 old-k8s-version-715005 kubelet[1553]: I1120 20:52:22.679815    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfvpc\" (UniqueName: \"kubernetes.io/projected/0042d6a2-8643-46e3-902b-f53060fcf7d2-kube-api-access-sfvpc\") pod \"kindnet-cfz75\" (UID: \"0042d6a2-8643-46e3-902b-f53060fcf7d2\") " pod="kube-system/kindnet-cfz75"
	Nov 20 20:52:22 old-k8s-version-715005 kubelet[1553]: I1120 20:52:22.679837    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b58b571d-f605-4fd4-8afa-d17455aaaaab-lib-modules\") pod \"kube-proxy-4pnqq\" (UID: \"b58b571d-f605-4fd4-8afa-d17455aaaaab\") " pod="kube-system/kube-proxy-4pnqq"
	Nov 20 20:52:22 old-k8s-version-715005 kubelet[1553]: I1120 20:52:22.679855    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0042d6a2-8643-46e3-902b-f53060fcf7d2-cni-cfg\") pod \"kindnet-cfz75\" (UID: \"0042d6a2-8643-46e3-902b-f53060fcf7d2\") " pod="kube-system/kindnet-cfz75"
	Nov 20 20:52:22 old-k8s-version-715005 kubelet[1553]: I1120 20:52:22.679871    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b58b571d-f605-4fd4-8afa-d17455aaaaab-xtables-lock\") pod \"kube-proxy-4pnqq\" (UID: \"b58b571d-f605-4fd4-8afa-d17455aaaaab\") " pod="kube-system/kube-proxy-4pnqq"
	Nov 20 20:52:22 old-k8s-version-715005 kubelet[1553]: I1120 20:52:22.679888    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59kdw\" (UniqueName: \"kubernetes.io/projected/b58b571d-f605-4fd4-8afa-d17455aaaaab-kube-api-access-59kdw\") pod \"kube-proxy-4pnqq\" (UID: \"b58b571d-f605-4fd4-8afa-d17455aaaaab\") " pod="kube-system/kube-proxy-4pnqq"
	Nov 20 20:52:23 old-k8s-version-715005 kubelet[1553]: I1120 20:52:23.378345    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4pnqq" podStartSLOduration=1.378305026 podCreationTimestamp="2025-11-20 20:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:23.378202335 +0000 UTC m=+13.190629566" watchObservedRunningTime="2025-11-20 20:52:23.378305026 +0000 UTC m=+13.190732254"
	Nov 20 20:52:35 old-k8s-version-715005 kubelet[1553]: I1120 20:52:35.883329    1553 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 20 20:52:35 old-k8s-version-715005 kubelet[1553]: I1120 20:52:35.907064    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-cfz75" podStartSLOduration=11.958094777 podCreationTimestamp="2025-11-20 20:52:22 +0000 UTC" firstStartedPulling="2025-11-20 20:52:23.304498177 +0000 UTC m=+13.116925406" lastFinishedPulling="2025-11-20 20:52:25.253410661 +0000 UTC m=+15.065837882" observedRunningTime="2025-11-20 20:52:26.388782718 +0000 UTC m=+16.201209947" watchObservedRunningTime="2025-11-20 20:52:35.907007253 +0000 UTC m=+25.719434486"
	Nov 20 20:52:35 old-k8s-version-715005 kubelet[1553]: I1120 20:52:35.907524    1553 topology_manager.go:215] "Topology Admit Handler" podUID="2c198f77-2da3-4dc0-98f2-5263299ec40b" podNamespace="kube-system" podName="coredns-5dd5756b68-mptgs"
	Nov 20 20:52:35 old-k8s-version-715005 kubelet[1553]: I1120 20:52:35.907700    1553 topology_manager.go:215] "Topology Admit Handler" podUID="6af79ed2-0bd8-44f7-a2bb-8e7788cf7111" podNamespace="kube-system" podName="storage-provisioner"
	Nov 20 20:52:36 old-k8s-version-715005 kubelet[1553]: I1120 20:52:36.082664    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm78s\" (UniqueName: \"kubernetes.io/projected/2c198f77-2da3-4dc0-98f2-5263299ec40b-kube-api-access-xm78s\") pod \"coredns-5dd5756b68-mptgs\" (UID: \"2c198f77-2da3-4dc0-98f2-5263299ec40b\") " pod="kube-system/coredns-5dd5756b68-mptgs"
	Nov 20 20:52:36 old-k8s-version-715005 kubelet[1553]: I1120 20:52:36.082744    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78q9l\" (UniqueName: \"kubernetes.io/projected/6af79ed2-0bd8-44f7-a2bb-8e7788cf7111-kube-api-access-78q9l\") pod \"storage-provisioner\" (UID: \"6af79ed2-0bd8-44f7-a2bb-8e7788cf7111\") " pod="kube-system/storage-provisioner"
	Nov 20 20:52:36 old-k8s-version-715005 kubelet[1553]: I1120 20:52:36.082779    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c198f77-2da3-4dc0-98f2-5263299ec40b-config-volume\") pod \"coredns-5dd5756b68-mptgs\" (UID: \"2c198f77-2da3-4dc0-98f2-5263299ec40b\") " pod="kube-system/coredns-5dd5756b68-mptgs"
	Nov 20 20:52:36 old-k8s-version-715005 kubelet[1553]: I1120 20:52:36.082808    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6af79ed2-0bd8-44f7-a2bb-8e7788cf7111-tmp\") pod \"storage-provisioner\" (UID: \"6af79ed2-0bd8-44f7-a2bb-8e7788cf7111\") " pod="kube-system/storage-provisioner"
	Nov 20 20:52:36 old-k8s-version-715005 kubelet[1553]: I1120 20:52:36.410010    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.409958355 podCreationTimestamp="2025-11-20 20:52:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:36.409796502 +0000 UTC m=+26.222223732" watchObservedRunningTime="2025-11-20 20:52:36.409958355 +0000 UTC m=+26.222385586"
	Nov 20 20:52:36 old-k8s-version-715005 kubelet[1553]: I1120 20:52:36.421317    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-mptgs" podStartSLOduration=14.42125597 podCreationTimestamp="2025-11-20 20:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:36.421053425 +0000 UTC m=+26.233480850" watchObservedRunningTime="2025-11-20 20:52:36.42125597 +0000 UTC m=+26.233683201"
	Nov 20 20:52:39 old-k8s-version-715005 kubelet[1553]: I1120 20:52:39.055805    1553 topology_manager.go:215] "Topology Admit Handler" podUID="3a1d0e8f-ce19-4ac1-bea8-96d6e879131e" podNamespace="default" podName="busybox"
	Nov 20 20:52:39 old-k8s-version-715005 kubelet[1553]: I1120 20:52:39.201304    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djkb2\" (UniqueName: \"kubernetes.io/projected/3a1d0e8f-ce19-4ac1-bea8-96d6e879131e-kube-api-access-djkb2\") pod \"busybox\" (UID: \"3a1d0e8f-ce19-4ac1-bea8-96d6e879131e\") " pod="default/busybox"
	Nov 20 20:52:41 old-k8s-version-715005 kubelet[1553]: I1120 20:52:41.425214    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.058833545 podCreationTimestamp="2025-11-20 20:52:39 +0000 UTC" firstStartedPulling="2025-11-20 20:52:39.485462465 +0000 UTC m=+29.297889678" lastFinishedPulling="2025-11-20 20:52:40.851802022 +0000 UTC m=+30.664229235" observedRunningTime="2025-11-20 20:52:41.424674281 +0000 UTC m=+31.237101511" watchObservedRunningTime="2025-11-20 20:52:41.425173102 +0000 UTC m=+31.237600331"
	
	
	==> storage-provisioner [30d0f5bdd9f8bfd2c0796639f0ed8e490844e6c98a2754a2c49f7959c1a1f2a5] <==
	I1120 20:52:36.391585       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 20:52:36.400469       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 20:52:36.400520       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1120 20:52:36.407549       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 20:52:36.407928       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6e5947a7-1f12-4fc5-bee8-e5a8d2f00419", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-715005_3ea8dec4-ab4d-4039-9450-ca2b7352ce92 became leader
	I1120 20:52:36.407957       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-715005_3ea8dec4-ab4d-4039-9450-ca2b7352ce92!
	I1120 20:52:36.509166       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-715005_3ea8dec4-ab4d-4039-9450-ca2b7352ce92!
	

                                                
                                                
-- /stdout --
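The scheduler warnings above are the usual bootstrap pattern: list/watch requests are rejected until the kube-scheduler RBAC bindings and the extension-apiserver-authentication configmap informer have synced (the final "Caches are synced" line at 20:52:11). As a follow-up sanity check, illustrative only and not part of the test harness, impersonation can confirm the permissions are in place once the cluster settles (assumes kubectl access to this profile's context):

  # hypothetical follow-up check, not run by the test
  kubectl --context old-k8s-version-715005 auth can-i list nodes --as=system:kube-scheduler
  kubectl --context old-k8s-version-715005 auth can-i list persistentvolumes --as=system:kube-scheduler

Both typically answer "yes" on a healthy control plane, consistent with the warnings stopping once the caches sync.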
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-715005 -n old-k8s-version-715005
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-715005 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-715005
helpers_test.go:243: (dbg) docker inspect old-k8s-version-715005:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3b6a20512ce5e237d8ca49b91b2f96a096d390da4cf92a8def071dc90f221010",
	        "Created": "2025-11-20T20:51:55.667724791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 239626,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T20:51:55.707980557Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/3b6a20512ce5e237d8ca49b91b2f96a096d390da4cf92a8def071dc90f221010/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3b6a20512ce5e237d8ca49b91b2f96a096d390da4cf92a8def071dc90f221010/hostname",
	        "HostsPath": "/var/lib/docker/containers/3b6a20512ce5e237d8ca49b91b2f96a096d390da4cf92a8def071dc90f221010/hosts",
	        "LogPath": "/var/lib/docker/containers/3b6a20512ce5e237d8ca49b91b2f96a096d390da4cf92a8def071dc90f221010/3b6a20512ce5e237d8ca49b91b2f96a096d390da4cf92a8def071dc90f221010-json.log",
	        "Name": "/old-k8s-version-715005",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-715005:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-715005",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3b6a20512ce5e237d8ca49b91b2f96a096d390da4cf92a8def071dc90f221010",
	                "LowerDir": "/var/lib/docker/overlay2/85c7463c1a3a740713826ae627000420fe9eccd7da649211f57286f33afebd5f-init/diff:/var/lib/docker/overlay2/b8e13cfd95c92c89e06ea4ca61f150e2b9e9586529048197192d1a83648ef8cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/85c7463c1a3a740713826ae627000420fe9eccd7da649211f57286f33afebd5f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/85c7463c1a3a740713826ae627000420fe9eccd7da649211f57286f33afebd5f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/85c7463c1a3a740713826ae627000420fe9eccd7da649211f57286f33afebd5f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-715005",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-715005/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-715005",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-715005",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-715005",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "519c5740cd96610162543e0478f357a1c40858a76bf4bd954d93058851e4b011",
	            "SandboxKey": "/var/run/docker/netns/519c5740cd96",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-715005": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "369c4674dc51430afa443de03112fdde075c05b6373a2c857451d35a88c6b5e1",
	                    "EndpointID": "19dc54360de78ea08cefba6f708fa345c1a326c6b8006456f8533a57f821b980",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "7a:06:c9:9d:66:1e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-715005",
	                        "3b6a20512ce5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
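Note that HostConfig in the inspect output above reports "Ulimits": [], so the kicbase container was created without an explicit nofile override and inherits whatever the Docker daemon applies by default. A minimal way to confirm this on the host, shown here as an illustration rather than part of the recorded run:

  # what the container was created with (empty means daemon defaults apply)
  docker inspect -f '{{json .HostConfig.Ulimits}}' old-k8s-version-715005
  # the limit PID 1 inside the container actually received
  docker exec old-k8s-version-715005 grep 'open files' /proc/1/limits

Comparing the two against the daemon's default-ulimits configuration (if any) narrows down where a lower-than-expected open-files limit is coming from.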
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-715005 -n old-k8s-version-715005
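The helpers query one status field at a time with a Go template ({{.Host}} here, {{.APIServer}} earlier); the same --format flag also accepts a combined template when several components need checking in a single call, for example (illustrative, not taken from this run):

  out/minikube-linux-amd64 status -p old-k8s-version-715005 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'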
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-715005 logs -n 25
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ ssh     │ -p cilium-876657 sudo crio config                                                                                                                                                                                                                   │ cilium-876657             │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │                     │
	│ delete  │ -p cilium-876657                                                                                                                                                                                                                                    │ cilium-876657             │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ start   │ -p cert-expiration-137718 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-137718    │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p NoKubernetes-666907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                         │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:51 UTC │
	│ delete  │ -p stopped-upgrade-058944                                                                                                                                                                                                                           │ stopped-upgrade-058944    │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p kubernetes-upgrade-902531 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-902531 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ delete  │ -p missing-upgrade-670521                                                                                                                                                                                                                           │ missing-upgrade-670521    │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p force-systemd-flag-431737 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-431737 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ delete  │ -p NoKubernetes-666907                                                                                                                                                                                                                              │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p NoKubernetes-666907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                         │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ stop    │ -p kubernetes-upgrade-902531                                                                                                                                                                                                                        │ kubernetes-upgrade-902531 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p kubernetes-upgrade-902531 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-902531 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │                     │
	│ ssh     │ -p NoKubernetes-666907 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │                     │
	│ stop    │ -p NoKubernetes-666907                                                                                                                                                                                                                              │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p NoKubernetes-666907 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ ssh     │ -p NoKubernetes-666907 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │                     │
	│ delete  │ -p NoKubernetes-666907                                                                                                                                                                                                                              │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p cert-options-636195 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-636195       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:52 UTC │
	│ ssh     │ force-systemd-flag-431737 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-431737 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ delete  │ -p force-systemd-flag-431737                                                                                                                                                                                                                        │ force-systemd-flag-431737 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p old-k8s-version-715005 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-715005    │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:52 UTC │
	│ ssh     │ cert-options-636195 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-636195       │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │ 20 Nov 25 20:52 UTC │
	│ ssh     │ -p cert-options-636195 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-636195       │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │ 20 Nov 25 20:52 UTC │
	│ delete  │ -p cert-options-636195                                                                                                                                                                                                                              │ cert-options-636195       │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │ 20 Nov 25 20:52 UTC │
	│ start   │ -p no-preload-480337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-480337         │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:52:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:52:08.252448  242858 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:52:08.252562  242858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:52:08.252570  242858 out.go:374] Setting ErrFile to fd 2...
	I1120 20:52:08.252576  242858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:52:08.252753  242858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
	I1120 20:52:08.253282  242858 out.go:368] Setting JSON to false
	I1120 20:52:08.254779  242858 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2080,"bootTime":1763669848,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:52:08.254847  242858 start.go:143] virtualization: kvm guest
	I1120 20:52:08.256503  242858 out.go:179] * [no-preload-480337] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:52:08.258025  242858 notify.go:221] Checking for updates...
	I1120 20:52:08.258048  242858 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:52:08.260128  242858 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:52:08.261508  242858 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3769/kubeconfig
	I1120 20:52:08.262712  242858 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3769/.minikube
	I1120 20:52:08.263964  242858 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:52:08.265480  242858 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:52:08.267315  242858 config.go:182] Loaded profile config "cert-expiration-137718": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:52:08.267441  242858 config.go:182] Loaded profile config "kubernetes-upgrade-902531": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:52:08.267541  242858 config.go:182] Loaded profile config "old-k8s-version-715005": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1120 20:52:08.267634  242858 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:52:08.298259  242858 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:52:08.298399  242858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:52:08.367035  242858 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:84 SystemTime:2025-11-20 20:52:08.353260141 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:52:08.367188  242858 docker.go:319] overlay module found
	I1120 20:52:08.368888  242858 out.go:179] * Using the docker driver based on user configuration
	I1120 20:52:08.370134  242858 start.go:309] selected driver: docker
	I1120 20:52:08.370149  242858 start.go:930] validating driver "docker" against <nil>
	I1120 20:52:08.370160  242858 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:52:08.370935  242858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:52:08.436760  242858 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:84 SystemTime:2025-11-20 20:52:08.425987757 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:52:08.436947  242858 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 20:52:08.437244  242858 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:52:08.438740  242858 out.go:179] * Using Docker driver with root privileges
	I1120 20:52:08.439836  242858 cni.go:84] Creating CNI manager for ""
	I1120 20:52:08.439894  242858 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 20:52:08.439908  242858 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 20:52:08.439975  242858 start.go:353] cluster config:
	{Name:no-preload-480337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-480337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:52:08.441167  242858 out.go:179] * Starting "no-preload-480337" primary control-plane node in "no-preload-480337" cluster
	I1120 20:52:08.442267  242858 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1120 20:52:08.443897  242858 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:52:08.445359  242858 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 20:52:08.445439  242858 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:52:08.445494  242858 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/config.json ...
	I1120 20:52:08.445524  242858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/config.json: {Name:mk67fe584bdd61e7dc470a4845c1a48d09ae85c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:08.445659  242858 cache.go:107] acquiring lock: {Name:mk3ea08bf43a5d2bac31f44c4411f5077815f926 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:08.445701  242858 cache.go:107] acquiring lock: {Name:mk452f143f3760942acee0a1afa340e79fb15acb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:08.445754  242858 cache.go:115] /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1120 20:52:08.445726  242858 cache.go:107] acquiring lock: {Name:mk32e408a68e033995572d30bae912b78d78fdd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:08.445767  242858 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 120.56µs
	I1120 20:52:08.445785  242858 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1120 20:52:08.445769  242858 cache.go:107] acquiring lock: {Name:mk8589edbfed330a1ddb51d34e55cf4f6dba2585 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:08.445801  242858 cache.go:107] acquiring lock: {Name:mka8113f9113c7cf8c73b708b8e0c0e4338b0522 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:08.445815  242858 cache.go:107] acquiring lock: {Name:mkf92c975c475c307d4c631b384e242552425a97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:08.445804  242858 cache.go:107] acquiring lock: {Name:mkace5f2fd3da1bb55f21aaae93deda29f684d06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:08.445847  242858 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 20:52:08.445836  242858 cache.go:107] acquiring lock: {Name:mk1cd73325d398de1f9fcd7c35b741773c7770b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:08.445897  242858 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 20:52:08.445912  242858 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 20:52:08.446005  242858 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 20:52:08.446020  242858 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1120 20:52:08.446023  242858 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 20:52:08.446081  242858 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1120 20:52:08.447286  242858 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 20:52:08.447293  242858 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 20:52:08.447287  242858 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1120 20:52:08.447406  242858 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 20:52:08.447417  242858 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 20:52:08.447444  242858 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 20:52:08.447421  242858 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1120 20:52:08.470469  242858 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 20:52:08.470487  242858 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 20:52:08.470502  242858 cache.go:243] Successfully downloaded all kic artifacts
	I1120 20:52:08.470529  242858 start.go:360] acquireMachinesLock for no-preload-480337: {Name:mk38ae0cd7f919fa42a7cfea565c7e28ffc15120 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:08.470636  242858 start.go:364] duration metric: took 88.862µs to acquireMachinesLock for "no-preload-480337"
	I1120 20:52:08.470665  242858 start.go:93] Provisioning new machine with config: &{Name:no-preload-480337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-480337 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1120 20:52:08.470767  242858 start.go:125] createHost starting for "" (driver="docker")
	I1120 20:52:08.750662  238148 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.502666 seconds
	I1120 20:52:08.750868  238148 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 20:52:08.765060  238148 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 20:52:09.291266  238148 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 20:52:09.291589  238148 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-715005 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 20:52:09.814822  238148 kubeadm.go:319] [bootstrap-token] Using token: 16hbch.nehrzw8ak789mtyt
	I1120 20:52:06.090049  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1120 20:52:06.090086  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:06.535613  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:48474->192.168.103.2:8443: read: connection reset by peer
	I1120 20:52:06.586879  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:06.587349  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:07.087076  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:07.087656  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:07.587315  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:09.816352  238148 out.go:252]   - Configuring RBAC rules ...
	I1120 20:52:09.816506  238148 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 20:52:09.826772  238148 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 20:52:09.836932  238148 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 20:52:09.849779  238148 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 20:52:09.861350  238148 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 20:52:09.943146  238148 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 20:52:09.975570  238148 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 20:52:10.239991  238148 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 20:52:10.273516  238148 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 20:52:10.274683  238148 kubeadm.go:319] 
	I1120 20:52:10.274779  238148 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 20:52:10.274786  238148 kubeadm.go:319] 
	I1120 20:52:10.274929  238148 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 20:52:10.274954  238148 kubeadm.go:319] 
	I1120 20:52:10.274986  238148 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 20:52:10.275062  238148 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 20:52:10.275128  238148 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 20:52:10.275142  238148 kubeadm.go:319] 
	I1120 20:52:10.275209  238148 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 20:52:10.275217  238148 kubeadm.go:319] 
	I1120 20:52:10.275277  238148 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 20:52:10.275286  238148 kubeadm.go:319] 
	I1120 20:52:10.275356  238148 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 20:52:10.275540  238148 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 20:52:10.275662  238148 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 20:52:10.275671  238148 kubeadm.go:319] 
	I1120 20:52:10.275820  238148 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 20:52:10.275972  238148 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 20:52:10.275996  238148 kubeadm.go:319] 
	I1120 20:52:10.276126  238148 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 16hbch.nehrzw8ak789mtyt \
	I1120 20:52:10.276260  238148 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6363bf4687e9474b61ef24181dbec602d7e15f5bf816f1e3fd72b87e3c0c983f \
	I1120 20:52:10.276290  238148 kubeadm.go:319] 	--control-plane 
	I1120 20:52:10.276294  238148 kubeadm.go:319] 
	I1120 20:52:10.276478  238148 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 20:52:10.276490  238148 kubeadm.go:319] 
	I1120 20:52:10.276600  238148 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 16hbch.nehrzw8ak789mtyt \
	I1120 20:52:10.276726  238148 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6363bf4687e9474b61ef24181dbec602d7e15f5bf816f1e3fd72b87e3c0c983f 
	I1120 20:52:10.278899  238148 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1120 20:52:10.279067  238148 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 20:52:10.279103  238148 cni.go:84] Creating CNI manager for ""
	I1120 20:52:10.279116  238148 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 20:52:10.280874  238148 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 20:52:08.472716  242858 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 20:52:08.472973  242858 start.go:159] libmachine.API.Create for "no-preload-480337" (driver="docker")
	I1120 20:52:08.473007  242858 client.go:173] LocalClient.Create starting
	I1120 20:52:08.473093  242858 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem
	I1120 20:52:08.473142  242858 main.go:143] libmachine: Decoding PEM data...
	I1120 20:52:08.473162  242858 main.go:143] libmachine: Parsing certificate...
	I1120 20:52:08.473234  242858 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-3769/.minikube/certs/cert.pem
	I1120 20:52:08.473265  242858 main.go:143] libmachine: Decoding PEM data...
	I1120 20:52:08.473278  242858 main.go:143] libmachine: Parsing certificate...
	I1120 20:52:08.473719  242858 cli_runner.go:164] Run: docker network inspect no-preload-480337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 20:52:08.493293  242858 cli_runner.go:211] docker network inspect no-preload-480337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 20:52:08.493382  242858 network_create.go:284] running [docker network inspect no-preload-480337] to gather additional debugging logs...
	I1120 20:52:08.493408  242858 cli_runner.go:164] Run: docker network inspect no-preload-480337
	W1120 20:52:08.510935  242858 cli_runner.go:211] docker network inspect no-preload-480337 returned with exit code 1
	I1120 20:52:08.510962  242858 network_create.go:287] error running [docker network inspect no-preload-480337]: docker network inspect no-preload-480337: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-480337 not found
	I1120 20:52:08.510973  242858 network_create.go:289] output of [docker network inspect no-preload-480337]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-480337 not found
	
	** /stderr **
	I1120 20:52:08.511058  242858 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:52:08.530865  242858 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5a901ca622c0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:53:dd:e9:bf:88} reservation:<nil>}
	I1120 20:52:08.531757  242858 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6594e2724ba2 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:e6:72:df:4b:23} reservation:<nil>}
	I1120 20:52:08.532655  242858 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b5b02f2241a6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:6b:71:15:af:34} reservation:<nil>}
	I1120 20:52:08.533472  242858 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00234b7d0}
	I1120 20:52:08.533495  242858 network_create.go:124] attempt to create docker network no-preload-480337 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1120 20:52:08.533546  242858 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-480337 no-preload-480337
	I1120 20:52:08.585290  242858 network_create.go:108] docker network no-preload-480337 192.168.76.0/24 created
	I1120 20:52:08.585325  242858 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-480337" container
	I1120 20:52:08.585461  242858 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 20:52:08.597984  242858 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1120 20:52:08.598001  242858 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1120 20:52:08.604956  242858 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1120 20:52:08.605315  242858 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1120 20:52:08.606694  242858 cli_runner.go:164] Run: docker volume create no-preload-480337 --label name.minikube.sigs.k8s.io=no-preload-480337 --label created_by.minikube.sigs.k8s.io=true
	I1120 20:52:08.618534  242858 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1120 20:52:08.621471  242858 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1120 20:52:08.624491  242858 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1120 20:52:08.627070  242858 oci.go:103] Successfully created a docker volume no-preload-480337
	I1120 20:52:08.627146  242858 cli_runner.go:164] Run: docker run --rm --name no-preload-480337-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-480337 --entrypoint /usr/bin/test -v no-preload-480337:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 20:52:08.697577  242858 cache.go:157] /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1120 20:52:08.697608  242858 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 251.842825ms
	I1120 20:52:08.697621  242858 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1120 20:52:09.049023  242858 oci.go:107] Successfully prepared a docker volume no-preload-480337
	I1120 20:52:09.049073  242858 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	W1120 20:52:09.049166  242858 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1120 20:52:09.049225  242858 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1120 20:52:09.049280  242858 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 20:52:09.118960  242858 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-480337 --name no-preload-480337 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-480337 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-480337 --network no-preload-480337 --ip 192.168.76.2 --volume no-preload-480337:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 20:52:09.232948  242858 cache.go:157] /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1120 20:52:09.232980  242858 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 787.2171ms
	I1120 20:52:09.232993  242858 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1120 20:52:09.453384  242858 cli_runner.go:164] Run: docker container inspect no-preload-480337 --format={{.State.Running}}
	I1120 20:52:09.472962  242858 cli_runner.go:164] Run: docker container inspect no-preload-480337 --format={{.State.Status}}
	I1120 20:52:09.492439  242858 cli_runner.go:164] Run: docker exec no-preload-480337 stat /var/lib/dpkg/alternatives/iptables
	I1120 20:52:09.543100  242858 oci.go:144] the created container "no-preload-480337" has a running status.
	I1120 20:52:09.543125  242858 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-3769/.minikube/machines/no-preload-480337/id_rsa...
	I1120 20:52:09.964490  242858 cache.go:157] /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1120 20:52:09.964524  242858 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.518724938s
	I1120 20:52:09.964550  242858 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1120 20:52:10.039948  242858 cache.go:157] /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1120 20:52:10.039987  242858 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.594319275s
	I1120 20:52:10.040005  242858 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1120 20:52:10.093051  242858 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-3769/.minikube/machines/no-preload-480337/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 20:52:10.094772  242858 cache.go:157] /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1120 20:52:10.094803  242858 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.649119384s
	I1120 20:52:10.094827  242858 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1120 20:52:10.116809  242858 cache.go:157] /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1120 20:52:10.116833  242858 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.671077498s
	I1120 20:52:10.116845  242858 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1120 20:52:10.123333  242858 cli_runner.go:164] Run: docker container inspect no-preload-480337 --format={{.State.Status}}
	I1120 20:52:10.152798  242858 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 20:52:10.152827  242858 kic_runner.go:114] Args: [docker exec --privileged no-preload-480337 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 20:52:10.201923  242858 cli_runner.go:164] Run: docker container inspect no-preload-480337 --format={{.State.Status}}
	I1120 20:52:10.228150  242858 machine.go:94] provisionDockerMachine start ...
	I1120 20:52:10.228251  242858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-480337
	I1120 20:52:10.253229  242858 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:10.253812  242858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1120 20:52:10.253835  242858 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:52:10.413054  242858 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-480337
	
	I1120 20:52:10.413084  242858 ubuntu.go:182] provisioning hostname "no-preload-480337"
	I1120 20:52:10.413154  242858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-480337
	I1120 20:52:10.435191  242858 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:10.435437  242858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1120 20:52:10.435454  242858 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-480337 && echo "no-preload-480337" | sudo tee /etc/hostname
	I1120 20:52:10.583828  242858 cache.go:157] /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1120 20:52:10.583853  242858 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.138065582s
	I1120 20:52:10.583869  242858 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1120 20:52:10.583886  242858 cache.go:87] Successfully saved all images to host disk.
	I1120 20:52:10.588140  242858 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-480337
	
	I1120 20:52:10.588225  242858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-480337
	I1120 20:52:10.607575  242858 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:10.607874  242858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1120 20:52:10.607903  242858 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-480337' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-480337/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-480337' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:52:10.751208  242858 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:52:10.751244  242858 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-3769/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-3769/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-3769/.minikube}
	I1120 20:52:10.751270  242858 ubuntu.go:190] setting up certificates
	I1120 20:52:10.751282  242858 provision.go:84] configureAuth start
	I1120 20:52:10.751356  242858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-480337
	I1120 20:52:10.773196  242858 provision.go:143] copyHostCerts
	I1120 20:52:10.773264  242858 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3769/.minikube/ca.pem, removing ...
	I1120 20:52:10.773278  242858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3769/.minikube/ca.pem
	I1120 20:52:10.773380  242858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-3769/.minikube/ca.pem (1082 bytes)
	I1120 20:52:10.773498  242858 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3769/.minikube/cert.pem, removing ...
	I1120 20:52:10.773511  242858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3769/.minikube/cert.pem
	I1120 20:52:10.773555  242858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-3769/.minikube/cert.pem (1123 bytes)
	I1120 20:52:10.773632  242858 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3769/.minikube/key.pem, removing ...
	I1120 20:52:10.773642  242858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3769/.minikube/key.pem
	I1120 20:52:10.773680  242858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-3769/.minikube/key.pem (1679 bytes)
	I1120 20:52:10.773754  242858 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-3769/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca-key.pem org=jenkins.no-preload-480337 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-480337]
	I1120 20:52:10.929235  242858 provision.go:177] copyRemoteCerts
	I1120 20:52:10.929300  242858 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:52:10.929359  242858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-480337
	I1120 20:52:10.949497  242858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/no-preload-480337/id_rsa Username:docker}
	I1120 20:52:11.046944  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 20:52:11.070669  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:52:11.092589  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 20:52:11.112476  242858 provision.go:87] duration metric: took 361.173682ms to configureAuth
	I1120 20:52:11.112506  242858 ubuntu.go:206] setting minikube options for container-runtime
	I1120 20:52:11.112675  242858 config.go:182] Loaded profile config "no-preload-480337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:52:11.112687  242858 machine.go:97] duration metric: took 884.515372ms to provisionDockerMachine
	I1120 20:52:11.112693  242858 client.go:176] duration metric: took 2.639675681s to LocalClient.Create
	I1120 20:52:11.112713  242858 start.go:167] duration metric: took 2.639742922s to libmachine.API.Create "no-preload-480337"
	I1120 20:52:11.112722  242858 start.go:293] postStartSetup for "no-preload-480337" (driver="docker")
	I1120 20:52:11.112729  242858 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:52:11.112769  242858 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:52:11.112801  242858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-480337
	I1120 20:52:11.131812  242858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/no-preload-480337/id_rsa Username:docker}
	I1120 20:52:11.231672  242858 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:52:11.235446  242858 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 20:52:11.235470  242858 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 20:52:11.235483  242858 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3769/.minikube/addons for local assets ...
	I1120 20:52:11.235540  242858 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3769/.minikube/files for local assets ...
	I1120 20:52:11.235608  242858 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-3769/.minikube/files/etc/ssl/certs/77312.pem -> 77312.pem in /etc/ssl/certs
	I1120 20:52:11.235694  242858 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 20:52:11.243934  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/files/etc/ssl/certs/77312.pem --> /etc/ssl/certs/77312.pem (1708 bytes)
	I1120 20:52:11.264631  242858 start.go:296] duration metric: took 151.893896ms for postStartSetup
	I1120 20:52:11.265034  242858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-480337
	I1120 20:52:11.283078  242858 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/config.json ...
	I1120 20:52:11.283417  242858 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:52:11.283468  242858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-480337
	I1120 20:52:11.301828  242858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/no-preload-480337/id_rsa Username:docker}
	I1120 20:52:11.397720  242858 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 20:52:11.403267  242858 start.go:128] duration metric: took 2.932485378s to createHost
	I1120 20:52:11.403302  242858 start.go:83] releasing machines lock for "no-preload-480337", held for 2.932643607s
	I1120 20:52:11.403380  242858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-480337
	I1120 20:52:11.422042  242858 ssh_runner.go:195] Run: cat /version.json
	I1120 20:52:11.422066  242858 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:52:11.422097  242858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-480337
	I1120 20:52:11.422125  242858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-480337
	I1120 20:52:11.440831  242858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/no-preload-480337/id_rsa Username:docker}
	I1120 20:52:11.441126  242858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/no-preload-480337/id_rsa Username:docker}
	I1120 20:52:11.585908  242858 ssh_runner.go:195] Run: systemctl --version
	I1120 20:52:11.592596  242858 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:52:11.597797  242858 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:52:11.597875  242858 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:52:11.623700  242858 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1120 20:52:11.623720  242858 start.go:496] detecting cgroup driver to use...
	I1120 20:52:11.623747  242858 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 20:52:11.623815  242858 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1120 20:52:11.640937  242858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1120 20:52:11.655322  242858 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:52:11.655394  242858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:52:11.671618  242858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:52:11.689831  242858 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:52:11.776195  242858 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:52:11.859262  242858 docker.go:234] disabling docker service ...
	I1120 20:52:11.859326  242858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:52:11.877980  242858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:52:11.890841  242858 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:52:11.974290  242858 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:52:12.062916  242858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 20:52:12.075846  242858 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:52:12.090429  242858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1120 20:52:12.101339  242858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1120 20:52:12.111481  242858 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1120 20:52:12.111532  242858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1120 20:52:12.120812  242858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1120 20:52:12.130543  242858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1120 20:52:12.140477  242858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1120 20:52:12.150476  242858 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:52:12.158924  242858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1120 20:52:12.168516  242858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1120 20:52:12.177761  242858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1120 20:52:12.187179  242858 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:52:12.194577  242858 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 20:52:12.202081  242858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:52:12.283770  242858 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1120 20:52:12.353452  242858 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1120 20:52:12.353512  242858 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1120 20:52:12.357611  242858 start.go:564] Will wait 60s for crictl version
	I1120 20:52:12.357665  242858 ssh_runner.go:195] Run: which crictl
	I1120 20:52:12.361283  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 20:52:12.386880  242858 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1120 20:52:12.386957  242858 ssh_runner.go:195] Run: containerd --version
	I1120 20:52:12.407763  242858 ssh_runner.go:195] Run: containerd --version
	I1120 20:52:12.431279  242858 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1120 20:52:12.432442  242858 cli_runner.go:164] Run: docker network inspect no-preload-480337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:52:12.450847  242858 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1120 20:52:12.455064  242858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:52:12.465598  242858 kubeadm.go:884] updating cluster {Name:no-preload-480337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-480337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 20:52:12.465697  242858 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 20:52:12.465730  242858 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:52:12.489581  242858 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1120 20:52:12.489601  242858 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1120 20:52:12.489669  242858 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:52:12.489678  242858 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 20:52:12.489688  242858 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1120 20:52:12.489706  242858 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1120 20:52:12.489730  242858 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 20:52:12.489710  242858 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 20:52:12.489760  242858 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 20:52:12.489762  242858 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 20:52:12.491060  242858 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 20:52:12.491091  242858 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 20:52:12.491094  242858 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1120 20:52:12.491120  242858 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:52:12.491141  242858 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1120 20:52:12.491061  242858 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 20:52:12.491061  242858 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 20:52:12.491060  242858 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 20:52:12.617642  242858 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115"
	I1120 20:52:12.617722  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1120 20:52:12.621253  242858 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1120 20:52:12.621301  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1120 20:52:12.625193  242858 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1120 20:52:12.625252  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1120 20:52:12.635123  242858 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1120 20:52:12.635217  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1120 20:52:12.643402  242858 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1120 20:52:12.643452  242858 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1120 20:52:12.643499  242858 ssh_runner.go:195] Run: which crictl
	I1120 20:52:12.643799  242858 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1120 20:52:12.643834  242858 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 20:52:12.643895  242858 ssh_runner.go:195] Run: which crictl
	I1120 20:52:12.646580  242858 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1120 20:52:12.646643  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 20:52:12.650546  242858 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1120 20:52:12.650587  242858 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 20:52:12.650636  242858 ssh_runner.go:195] Run: which crictl
	I1120 20:52:12.658434  242858 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1120 20:52:12.658498  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1120 20:52:12.661446  242858 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1120 20:52:12.661506  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1120 20:52:12.661517  242858 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 20:52:12.661564  242858 ssh_runner.go:195] Run: which crictl
	I1120 20:52:12.661601  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1120 20:52:12.670723  242858 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1120 20:52:12.670795  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1120 20:52:12.671031  242858 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1120 20:52:12.671088  242858 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 20:52:12.671143  242858 ssh_runner.go:195] Run: which crictl
	I1120 20:52:12.695733  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1120 20:52:12.695759  242858 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1120 20:52:12.695783  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1120 20:52:12.695786  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1120 20:52:12.695793  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1120 20:52:12.695794  242858 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1120 20:52:12.695800  242858 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1120 20:52:12.695822  242858 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 20:52:12.695828  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 20:52:12.695835  242858 ssh_runner.go:195] Run: which crictl
	I1120 20:52:12.695852  242858 ssh_runner.go:195] Run: which crictl
	I1120 20:52:12.727921  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1120 20:52:12.727991  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1120 20:52:12.728037  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1120 20:52:12.728124  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1120 20:52:12.728128  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1120 20:52:12.728361  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1120 20:52:12.761629  242858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1120 20:52:12.761727  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1120 20:52:12.761733  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 20:52:12.761736  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1120 20:52:12.761970  242858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1120 20:52:12.762044  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1120 20:52:12.788758  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1120 20:52:12.788878  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1120 20:52:12.788907  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1120 20:52:12.791281  242858 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1120 20:52:12.791321  242858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1120 20:52:12.791417  242858 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1120 20:52:12.791441  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1120 20:52:12.791441  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1120 20:52:12.791314  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1120 20:52:12.791289  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 20:52:12.836623  242858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1120 20:52:12.836637  242858 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1120 20:52:12.836653  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1120 20:52:12.836663  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1120 20:52:12.836625  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1120 20:52:12.836749  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1120 20:52:12.840681  242858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1120 20:52:12.840774  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1120 20:52:12.997385  242858 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1120 20:52:12.997425  242858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1120 20:52:12.997436  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1120 20:52:12.997491  242858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1120 20:52:12.997522  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1120 20:52:12.997527  242858 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1120 20:52:12.997544  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1120 20:52:12.997585  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1120 20:52:13.032735  242858 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1120 20:52:13.032771  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1120 20:52:13.032744  242858 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1120 20:52:13.032810  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1120 20:52:13.106714  242858 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1120 20:52:13.106787  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
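For context on the 242858 lines above: this is the no-preload profile loading its cached images, and the pattern is always the same — probe the target path with stat -c "%s %y", scp the cached tarball across only when the probe fails, then import it with sudo ctr -n=k8s.io images import. Below is a minimal local sketch of that probe-then-transfer decision; the ensureImage helper and hard-coded paths are illustrative stand-ins for minikube's ssh_runner-based code, and it runs the commands locally rather than over SSH.

// Illustrative sketch only: mirrors the probe-then-transfer pattern visible in the
// log above (stat the target, copy the cached image tarball if absent, then import).
// The ensureImage helper and paths are hypothetical, not minikube's real API.
package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

func ensureImage(cached, target string) error {
	if _, err := os.Stat(target); err == nil {
		return nil // already present on the node; nothing to transfer
	}
	src, err := os.Open(cached)
	if err != nil {
		return err
	}
	defer src.Close()
	dst, err := os.Create(target)
	if err != nil {
		return err
	}
	defer dst.Close()
	if _, err := io.Copy(dst, src); err != nil {
		return err
	}
	// Load the transferred tarball into containerd's k8s.io namespace,
	// as the log does with "sudo ctr -n=k8s.io images import".
	return exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", target).Run()
}

func main() {
	err := ensureImage(
		"/home/jenkins/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1",
		"/var/lib/minikube/images/pause_3.10.1",
	)
	fmt.Println("ensureImage:", err)
}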
	I1120 20:52:10.282432  238148 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 20:52:10.288612  238148 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1120 20:52:10.288635  238148 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 20:52:10.325924  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 20:52:11.009949  238148 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 20:52:11.010040  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:11.010050  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-715005 minikube.k8s.io/updated_at=2025_11_20T20_52_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=old-k8s-version-715005 minikube.k8s.io/primary=true
	I1120 20:52:11.019529  238148 ops.go:34] apiserver oom_adj: -16
	I1120 20:52:11.082068  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:11.582258  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:12.082538  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:12.582696  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:13.082475  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:13.582190  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:14.082300  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:14.582719  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:12.588104  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1120 20:52:12.588162  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:13.257481  242858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1120 20:52:13.257518  242858 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1120 20:52:13.257567  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1120 20:52:13.484484  242858 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1120 20:52:13.484551  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:52:14.260286  242858 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.002694769s)
	I1120 20:52:14.260311  242858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1120 20:52:14.260327  242858 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1120 20:52:14.260388  242858 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1120 20:52:14.260440  242858 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:52:14.260484  242858 ssh_runner.go:195] Run: which crictl
	I1120 20:52:14.260397  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1120 20:52:14.264886  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:52:15.172700  242858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1120 20:52:15.172730  242858 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1120 20:52:15.172775  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1120 20:52:15.172846  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:52:16.416639  242858 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.243755408s)
	I1120 20:52:16.416707  242858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:52:16.416710  242858 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.243913458s)
	I1120 20:52:16.416738  242858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1120 20:52:16.416763  242858 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1120 20:52:16.416796  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1120 20:52:17.473434  242858 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.056609494s)
	I1120 20:52:17.473462  242858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1120 20:52:17.473496  242858 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1120 20:52:17.473499  242858 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.056765018s)
	I1120 20:52:17.473547  242858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1120 20:52:17.473572  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1120 20:52:17.473641  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1120 20:52:17.477792  242858 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1120 20:52:17.477828  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1120 20:52:15.082709  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:15.582847  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:16.082616  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:16.582607  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:17.082814  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:17.582328  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:18.082295  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:18.582141  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:19.082885  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:19.582520  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:17.589485  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1120 20:52:17.589528  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:20.082310  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:20.583143  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:21.082776  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:21.583013  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:22.082458  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:22.582097  238148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:22.687347  238148 kubeadm.go:1114] duration metric: took 11.677365094s to wait for elevateKubeSystemPrivileges
	I1120 20:52:22.687400  238148 kubeadm.go:403] duration metric: took 21.724823766s to StartCluster
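The half-second cadence of the sudo .../kubectl get sa default calls above (20:52:11 through 20:52:22) is minikube polling until the default service account exists; once a call succeeds, the elevateKubeSystemPrivileges wait is recorded as 11.68s. A rough sketch of such a retry loop follows; waitForDefaultSA and the binary/kubeconfig paths are assumptions, not minikube's actual implementation.

// Rough sketch of the poll loop implied by the log above: run
// "kubectl get sa default" every 500ms until it succeeds or the deadline passes.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(ctx context.Context, kubectl, kubeconfig string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		cmd := exec.CommandContext(ctx, "sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default service account exists; the wait is over
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	err := waitForDefaultSA(ctx, "/var/lib/minikube/binaries/v1.28.0/kubectl",
		"/var/lib/minikube/kubeconfig")
	fmt.Println("waitForDefaultSA:", err)
}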
	I1120 20:52:22.687423  238148 settings.go:142] acquiring lock: {Name:mkd78c1a946fc1da0bff0b049ee93f62b6457c3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:22.687501  238148 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-3769/kubeconfig
	I1120 20:52:22.689408  238148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/kubeconfig: {Name:mk92246a312eabd67c28c34f15135551d85e2541 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:22.689743  238148 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1120 20:52:22.689876  238148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 20:52:22.690153  238148 config.go:182] Loaded profile config "old-k8s-version-715005": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1120 20:52:22.690208  238148 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 20:52:22.690292  238148 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-715005"
	I1120 20:52:22.690318  238148 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-715005"
	I1120 20:52:22.690347  238148 host.go:66] Checking if "old-k8s-version-715005" exists ...
	I1120 20:52:22.690337  238148 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-715005"
	I1120 20:52:22.690418  238148 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-715005"
	I1120 20:52:22.690925  238148 cli_runner.go:164] Run: docker container inspect old-k8s-version-715005 --format={{.State.Status}}
	I1120 20:52:22.691054  238148 cli_runner.go:164] Run: docker container inspect old-k8s-version-715005 --format={{.State.Status}}
	I1120 20:52:22.691183  238148 out.go:179] * Verifying Kubernetes components...
	I1120 20:52:22.694668  238148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:52:22.721925  238148 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-715005"
	I1120 20:52:22.722060  238148 host.go:66] Checking if "old-k8s-version-715005" exists ...
	I1120 20:52:22.722655  238148 cli_runner.go:164] Run: docker container inspect old-k8s-version-715005 --format={{.State.Status}}
	I1120 20:52:22.723628  238148 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:52:18.613862  242858 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.140264773s)
	I1120 20:52:18.613889  242858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1120 20:52:18.613927  242858 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1120 20:52:18.613983  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1120 20:52:21.193703  242858 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (2.579693858s)
	I1120 20:52:21.193738  242858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1120 20:52:21.193773  242858 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1120 20:52:21.193840  242858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1120 20:52:21.574262  242858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-3769/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1120 20:52:21.574305  242858 cache_images.go:125] Successfully loaded all cached images
	I1120 20:52:21.574312  242858 cache_images.go:94] duration metric: took 9.084699265s to LoadCachedImages
	I1120 20:52:21.574329  242858 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1120 20:52:21.574471  242858 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-480337 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-480337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 20:52:21.574537  242858 ssh_runner.go:195] Run: sudo crictl info
	I1120 20:52:21.603319  242858 cni.go:84] Creating CNI manager for ""
	I1120 20:52:21.603343  242858 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 20:52:21.603377  242858 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 20:52:21.603411  242858 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-480337 NodeName:no-preload-480337 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 20:52:21.603588  242858 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-480337"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 20:52:21.603664  242858 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:52:21.612556  242858 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1120 20:52:21.612621  242858 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1120 20:52:21.620971  242858 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1120 20:52:21.621063  242858 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21923-3769/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1120 20:52:21.621075  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1120 20:52:21.621091  242858 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21923-3769/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1120 20:52:21.625310  242858 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1120 20:52:21.625358  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1120 20:52:22.261818  242858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:52:22.276106  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1120 20:52:22.280357  242858 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1120 20:52:22.280403  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1120 20:52:22.550268  242858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1120 20:52:22.554488  242858 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1120 20:52:22.554526  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1120 20:52:22.811251  242858 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 20:52:22.821552  242858 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1120 20:52:22.839319  242858 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:52:22.860159  242858 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1120 20:52:22.875158  242858 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1120 20:52:22.880468  242858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
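The bash one-liner above is how the control-plane.minikube.internal entry gets pinned in the node's /etc/hosts: strip any existing line for that host, append the fresh 192.168.76.2 mapping, write to a temp file, and sudo cp it back into place. Here is a hedged Go equivalent of the same rewrite; pinControlPlane is an illustrative helper that writes the file directly instead of going through sudo.

// Illustrative rewrite of /etc/hosts matching the shell one-liner above:
// drop any line already ending in "\tcontrol-plane.minikube.internal",
// append the desired mapping, and write the file back via a temp file.
package main

import (
	"os"
	"strings"
)

func pinControlPlane(hostsPath, ip string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // stale entry; it is re-added below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\tcontrol-plane.minikube.internal")
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath) // the real flow uses "sudo cp" instead
}

func main() {
	_ = pinControlPlane("/etc/hosts", "192.168.76.2")
}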
	I1120 20:52:22.892551  242858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:52:23.010633  242858 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:52:23.045182  242858 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337 for IP: 192.168.76.2
	I1120 20:52:23.045210  242858 certs.go:195] generating shared ca certs ...
	I1120 20:52:23.045229  242858 certs.go:227] acquiring lock for ca certs: {Name:mk775617087d2732283088aad08819408765453b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:23.045401  242858 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-3769/.minikube/ca.key
	I1120 20:52:23.045458  242858 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-3769/.minikube/proxy-client-ca.key
	I1120 20:52:23.045474  242858 certs.go:257] generating profile certs ...
	I1120 20:52:23.045550  242858 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.key
	I1120 20:52:23.045576  242858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.crt with IP's: []
	I1120 20:52:22.724959  238148 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:52:22.725017  238148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 20:52:22.725109  238148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-715005
	I1120 20:52:22.757195  238148 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 20:52:22.757229  238148 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 20:52:22.757287  238148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-715005
	I1120 20:52:22.766165  238148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33059 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/old-k8s-version-715005/id_rsa Username:docker}
	I1120 20:52:22.790360  238148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33059 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/old-k8s-version-715005/id_rsa Username:docker}
	I1120 20:52:22.826000  238148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 20:52:22.866259  238148 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:52:22.884982  238148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:52:22.912703  238148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 20:52:23.113529  238148 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1120 20:52:23.115312  238148 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-715005" to be "Ready" ...
	I1120 20:52:23.319716  238148 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1120 20:52:23.320952  238148 addons.go:515] duration metric: took 630.739345ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1120 20:52:23.618249  238148 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-715005" context rescaled to 1 replicas
	I1120 20:52:22.590569  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1120 20:52:22.590637  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:23.272593  242858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.crt ...
	I1120 20:52:23.272629  242858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.crt: {Name:mk7a84bdb8ce4d387a03a977e465f46901b9ecca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:23.272826  242858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.key ...
	I1120 20:52:23.272846  242858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.key: {Name:mk85515619d0d5f42ade705dd7b83fa5c49d94e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:23.272962  242858 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.key.3960ac87
	I1120 20:52:23.272987  242858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.crt.3960ac87 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1120 20:52:23.487592  242858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.crt.3960ac87 ...
	I1120 20:52:23.487619  242858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.crt.3960ac87: {Name:mkfd30b6222da006020eb33948c0ef334b323426 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:23.487776  242858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.key.3960ac87 ...
	I1120 20:52:23.487790  242858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.key.3960ac87: {Name:mk44a5d802018397fb26ee24c50c7deaa57ff0c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:23.487872  242858 certs.go:382] copying /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.crt.3960ac87 -> /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.crt
	I1120 20:52:23.487948  242858 certs.go:386] copying /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.key.3960ac87 -> /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.key
	I1120 20:52:23.488013  242858 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/proxy-client.key
	I1120 20:52:23.488033  242858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/proxy-client.crt with IP's: []
	I1120 20:52:23.785632  242858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/proxy-client.crt ...
	I1120 20:52:23.785658  242858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/proxy-client.crt: {Name:mk446faa52377df58cd5afc43090ee71e8db7eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:23.785816  242858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/proxy-client.key ...
	I1120 20:52:23.785832  242858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/proxy-client.key: {Name:mk31b320f6c39c68d8ce39cc9567e7b46fda7feb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
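The certs.go lines above generate the profile's client, apiserver, and proxy-client key pairs, with the apiserver certificate signed for the IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2. What follows is only a generic crypto/x509 sketch of signing a serving certificate with those IP SANs from a CA key pair, not minikube's crypto.go; the CA here is generated in-process instead of being read from the .minikube directory, and error handling is elided.

// Generic sketch of CA-signed serving-cert generation with the IP SANs seen in
// the log. Errors are ignored for brevity; this is illustration, not minikube code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In the real flow the CA comes from the profile's ca.crt/ca.key; here we make one.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving certificate for the apiserver, with the IP SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}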
	I1120 20:52:23.786011  242858 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/7731.pem (1338 bytes)
	W1120 20:52:23.786063  242858 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-3769/.minikube/certs/7731_empty.pem, impossibly tiny 0 bytes
	I1120 20:52:23.786073  242858 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 20:52:23.786111  242858 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:52:23.786134  242858 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:52:23.786160  242858 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/key.pem (1679 bytes)
	I1120 20:52:23.786198  242858 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/files/etc/ssl/certs/77312.pem (1708 bytes)
	I1120 20:52:23.786744  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:52:23.806909  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:52:23.825261  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:52:23.843574  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:52:23.862110  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 20:52:23.880708  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 20:52:23.899197  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 20:52:23.917532  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1120 20:52:23.939072  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/files/etc/ssl/certs/77312.pem --> /usr/share/ca-certificates/77312.pem (1708 bytes)
	I1120 20:52:24.063298  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:52:24.128578  242858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/certs/7731.pem --> /usr/share/ca-certificates/7731.pem (1338 bytes)
	I1120 20:52:24.151285  242858 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 20:52:24.164998  242858 ssh_runner.go:195] Run: openssl version
	I1120 20:52:24.171390  242858 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/77312.pem
	I1120 20:52:24.179204  242858 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/77312.pem /etc/ssl/certs/77312.pem
	I1120 20:52:24.187192  242858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77312.pem
	I1120 20:52:24.191320  242858 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:26 /usr/share/ca-certificates/77312.pem
	I1120 20:52:24.191430  242858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77312.pem
	I1120 20:52:24.227765  242858 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 20:52:24.236942  242858 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/77312.pem /etc/ssl/certs/3ec20f2e.0
	I1120 20:52:24.245416  242858 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:24.253726  242858 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:52:24.261768  242858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:24.265899  242858 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:24.265952  242858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:24.302264  242858 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 20:52:24.310398  242858 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 20:52:24.318255  242858 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7731.pem
	I1120 20:52:24.326249  242858 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7731.pem /etc/ssl/certs/7731.pem
	I1120 20:52:24.334629  242858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7731.pem
	I1120 20:52:24.338657  242858 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:26 /usr/share/ca-certificates/7731.pem
	I1120 20:52:24.338728  242858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7731.pem
	I1120 20:52:24.376799  242858 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 20:52:24.385154  242858 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7731.pem /etc/ssl/certs/51391683.0
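The openssl x509 -hash / ln -fs pairs above wire the copied certificates into the node's OpenSSL trust store: each certificate is symlinked as /etc/ssl/certs/<subject-hash>.0 (b5213941.0 for minikubeCA.pem here). Below is a small sketch of that step; trustCert is a hypothetical helper that runs locally, whereas the real commands go over SSH with sudo.

// Sketch of the trust wiring above: ask openssl for the certificate's subject
// hash and expose it as /etc/ssl/certs/<hash>.0, where OpenSSL-linked clients
// look for CA certificates.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirror "ln -fs": replace any existing symlink
	return os.Symlink(pemPath, link)
}

func main() {
	fmt.Println(trustCert("/usr/share/ca-certificates/minikubeCA.pem"))
}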
	I1120 20:52:24.393234  242858 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:52:24.397049  242858 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 20:52:24.397114  242858 kubeadm.go:401] StartCluster: {Name:no-preload-480337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-480337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:52:24.397194  242858 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1120 20:52:24.397267  242858 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:52:24.428423  242858 cri.go:89] found id: ""
	I1120 20:52:24.428487  242858 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 20:52:24.438710  242858 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 20:52:24.449299  242858 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 20:52:24.449375  242858 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 20:52:24.459536  242858 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 20:52:24.459556  242858 kubeadm.go:158] found existing configuration files:
	
	I1120 20:52:24.459604  242858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 20:52:24.468144  242858 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 20:52:24.468200  242858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 20:52:24.476815  242858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 20:52:24.486242  242858 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 20:52:24.486304  242858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 20:52:24.496106  242858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 20:52:24.505724  242858 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 20:52:24.505782  242858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 20:52:24.514672  242858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 20:52:24.524027  242858 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 20:52:24.524092  242858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 20:52:24.533975  242858 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 20:52:24.619419  242858 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1120 20:52:24.680410  242858 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1120 20:52:25.119561  238148 node_ready.go:57] node "old-k8s-version-715005" has "Ready":"False" status (will retry)
	W1120 20:52:27.643468  238148 node_ready.go:57] node "old-k8s-version-715005" has "Ready":"False" status (will retry)
	I1120 20:52:27.593264  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1120 20:52:27.593340  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:27.750894  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:35434->192.168.103.2:8443: read: connection reset by peer
	I1120 20:52:28.086453  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:28.086851  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:28.586485  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:28.586954  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:29.086442  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:29.086851  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:29.587389  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:29.587828  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	W1120 20:52:30.118581  238148 node_ready.go:57] node "old-k8s-version-715005" has "Ready":"False" status (will retry)
	W1120 20:52:32.618929  238148 node_ready.go:57] node "old-k8s-version-715005" has "Ready":"False" status (will retry)
	I1120 20:52:30.086483  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:30.086908  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:30.586449  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:30.586956  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:31.086463  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:31.086895  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:31.586541  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:31.586971  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:32.086571  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:32.087046  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:32.586544  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:32.586991  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:33.086432  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:33.086879  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:33.586450  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:33.586927  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:34.086541  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:34.086925  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:34.586458  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:34.586899  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
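The 231112 lines belong to a separate profile whose apiserver is being restarted: every /healthz probe either times out, is reset, or is refused, and the checker simply retries on a 500ms cadence. A compact sketch of such a probe follows; the healthzOK helper and the skipped TLS verification are assumptions to keep it self-contained, not minikube's api_server.go.

// Minimal sketch of an apiserver healthz probe like the one logged above:
// GET https://<ip>:8443/healthz with a short timeout, treating connection
// refused / reset / timeout as "not ready yet".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func healthzOK(url string) bool {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false // refused, reset, or timed out: apiserver not up yet
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	url := "https://192.168.103.2:8443/healthz"
	for i := 0; i < 10 && !healthzOK(url); i++ {
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("healthy:", healthzOK(url))
}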
	I1120 20:52:35.816708  242858 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 20:52:35.816800  242858 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 20:52:35.816948  242858 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 20:52:35.817027  242858 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1120 20:52:35.817097  242858 kubeadm.go:319] OS: Linux
	I1120 20:52:35.817148  242858 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 20:52:35.817194  242858 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 20:52:35.817243  242858 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 20:52:35.817320  242858 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 20:52:35.817383  242858 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 20:52:35.817442  242858 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 20:52:35.817491  242858 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 20:52:35.817568  242858 kubeadm.go:319] CGROUPS_IO: enabled
	I1120 20:52:35.817637  242858 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 20:52:35.817722  242858 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 20:52:35.817809  242858 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 20:52:35.817887  242858 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 20:52:35.819194  242858 out.go:252]   - Generating certificates and keys ...
	I1120 20:52:35.819273  242858 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 20:52:35.819349  242858 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 20:52:35.819472  242858 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 20:52:35.819552  242858 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 20:52:35.819646  242858 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 20:52:35.819695  242858 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 20:52:35.819746  242858 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 20:52:35.819854  242858 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-480337] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1120 20:52:35.819903  242858 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 20:52:35.820019  242858 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-480337] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1120 20:52:35.820080  242858 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 20:52:35.820140  242858 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 20:52:35.820196  242858 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 20:52:35.820273  242858 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 20:52:35.820348  242858 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 20:52:35.820452  242858 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 20:52:35.820533  242858 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 20:52:35.820630  242858 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 20:52:35.820707  242858 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 20:52:35.820799  242858 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 20:52:35.820874  242858 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 20:52:35.822148  242858 out.go:252]   - Booting up control plane ...
	I1120 20:52:35.822225  242858 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 20:52:35.822291  242858 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 20:52:35.822359  242858 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 20:52:35.822472  242858 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 20:52:35.822579  242858 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 20:52:35.822729  242858 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 20:52:35.822820  242858 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 20:52:35.822877  242858 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 20:52:35.823038  242858 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 20:52:35.823159  242858 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 20:52:35.823248  242858 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000889914s
	I1120 20:52:35.823363  242858 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 20:52:35.823499  242858 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1120 20:52:35.823658  242858 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 20:52:35.823780  242858 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 20:52:35.823893  242858 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.505586489s
	I1120 20:52:35.823986  242858 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.124627255s
	I1120 20:52:35.824101  242858 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00208951s
	I1120 20:52:35.824256  242858 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 20:52:35.824442  242858 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 20:52:35.824544  242858 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 20:52:35.824787  242858 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-480337 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 20:52:35.824867  242858 kubeadm.go:319] [bootstrap-token] Using token: kimko6.d8ifdar0sarfgkue
	I1120 20:52:35.826341  242858 out.go:252]   - Configuring RBAC rules ...
	I1120 20:52:35.826458  242858 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 20:52:35.826533  242858 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 20:52:35.826671  242858 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 20:52:35.826791  242858 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 20:52:35.826891  242858 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 20:52:35.826964  242858 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 20:52:35.827060  242858 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 20:52:35.827108  242858 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 20:52:35.827153  242858 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 20:52:35.827163  242858 kubeadm.go:319] 
	I1120 20:52:35.827221  242858 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 20:52:35.827227  242858 kubeadm.go:319] 
	I1120 20:52:35.827301  242858 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 20:52:35.827308  242858 kubeadm.go:319] 
	I1120 20:52:35.827329  242858 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 20:52:35.827400  242858 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 20:52:35.827444  242858 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 20:52:35.827450  242858 kubeadm.go:319] 
	I1120 20:52:35.827494  242858 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 20:52:35.827500  242858 kubeadm.go:319] 
	I1120 20:52:35.827540  242858 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 20:52:35.827545  242858 kubeadm.go:319] 
	I1120 20:52:35.827590  242858 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 20:52:35.827671  242858 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 20:52:35.827770  242858 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 20:52:35.827778  242858 kubeadm.go:319] 
	I1120 20:52:35.827890  242858 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 20:52:35.827978  242858 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 20:52:35.827985  242858 kubeadm.go:319] 
	I1120 20:52:35.828060  242858 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kimko6.d8ifdar0sarfgkue \
	I1120 20:52:35.828152  242858 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6363bf4687e9474b61ef24181dbec602d7e15f5bf816f1e3fd72b87e3c0c983f \
	I1120 20:52:35.828172  242858 kubeadm.go:319] 	--control-plane 
	I1120 20:52:35.828177  242858 kubeadm.go:319] 
	I1120 20:52:35.828263  242858 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 20:52:35.828270  242858 kubeadm.go:319] 
	I1120 20:52:35.828355  242858 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kimko6.d8ifdar0sarfgkue \
	I1120 20:52:35.828521  242858 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6363bf4687e9474b61ef24181dbec602d7e15f5bf816f1e3fd72b87e3c0c983f 
	I1120 20:52:35.828535  242858 cni.go:84] Creating CNI manager for ""
	I1120 20:52:35.828540  242858 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 20:52:35.829886  242858 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 20:52:35.831052  242858 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 20:52:35.835504  242858 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 20:52:35.835522  242858 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 20:52:35.848645  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 20:52:36.063646  242858 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 20:52:36.063726  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:36.063745  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-480337 minikube.k8s.io/updated_at=2025_11_20T20_52_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=no-preload-480337 minikube.k8s.io/primary=true
	I1120 20:52:36.154515  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:36.154568  242858 ops.go:34] apiserver oom_adj: -16
	I1120 20:52:36.655018  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:37.154622  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:37.654645  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:38.155458  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1120 20:52:35.118798  238148 node_ready.go:57] node "old-k8s-version-715005" has "Ready":"False" status (will retry)
	I1120 20:52:36.119441  238148 node_ready.go:49] node "old-k8s-version-715005" is "Ready"
	I1120 20:52:36.119474  238148 node_ready.go:38] duration metric: took 13.004118914s for node "old-k8s-version-715005" to be "Ready" ...
	I1120 20:52:36.119492  238148 api_server.go:52] waiting for apiserver process to appear ...
	I1120 20:52:36.119550  238148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:52:36.136261  238148 api_server.go:72] duration metric: took 13.446468406s to wait for apiserver process to appear ...
	I1120 20:52:36.136286  238148 api_server.go:88] waiting for apiserver healthz status ...
	I1120 20:52:36.136303  238148 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1120 20:52:36.142400  238148 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1120 20:52:36.143959  238148 api_server.go:141] control plane version: v1.28.0
	I1120 20:52:36.143990  238148 api_server.go:131] duration metric: took 7.697032ms to wait for apiserver health ...
	I1120 20:52:36.144000  238148 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 20:52:36.148683  238148 system_pods.go:59] 8 kube-system pods found
	I1120 20:52:36.148739  238148 system_pods.go:61] "coredns-5dd5756b68-mptgs" [2c198f77-2da3-4dc0-98f2-5263299ec40b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:52:36.148757  238148 system_pods.go:61] "etcd-old-k8s-version-715005" [0bf088f9-234a-4a72-9d1b-d6f088300b75] Running
	I1120 20:52:36.148764  238148 system_pods.go:61] "kindnet-cfz75" [0042d6a2-8643-46e3-902b-f53060fcf7d2] Running
	I1120 20:52:36.148769  238148 system_pods.go:61] "kube-apiserver-old-k8s-version-715005" [8e225071-07c8-4edf-859f-88b2e5001f12] Running
	I1120 20:52:36.148783  238148 system_pods.go:61] "kube-controller-manager-old-k8s-version-715005" [57c8b5dd-c382-44a4-b0d8-6daff8243ac0] Running
	I1120 20:52:36.148787  238148 system_pods.go:61] "kube-proxy-4pnqq" [b58b571d-f605-4fd4-8afa-d17455aaaaab] Running
	I1120 20:52:36.148793  238148 system_pods.go:61] "kube-scheduler-old-k8s-version-715005" [31fcfd0d-7579-4237-96e8-08202f831aa8] Running
	I1120 20:52:36.148814  238148 system_pods.go:61] "storage-provisioner" [6af79ed2-0bd8-44f7-a2bb-8e7788cf7111] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:52:36.148824  238148 system_pods.go:74] duration metric: took 4.816269ms to wait for pod list to return data ...
	I1120 20:52:36.148839  238148 default_sa.go:34] waiting for default service account to be created ...
	I1120 20:52:36.151480  238148 default_sa.go:45] found service account: "default"
	I1120 20:52:36.151502  238148 default_sa.go:55] duration metric: took 2.6562ms for default service account to be created ...
	I1120 20:52:36.151511  238148 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 20:52:36.155002  238148 system_pods.go:86] 8 kube-system pods found
	I1120 20:52:36.155030  238148 system_pods.go:89] "coredns-5dd5756b68-mptgs" [2c198f77-2da3-4dc0-98f2-5263299ec40b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:52:36.155037  238148 system_pods.go:89] "etcd-old-k8s-version-715005" [0bf088f9-234a-4a72-9d1b-d6f088300b75] Running
	I1120 20:52:36.155044  238148 system_pods.go:89] "kindnet-cfz75" [0042d6a2-8643-46e3-902b-f53060fcf7d2] Running
	I1120 20:52:36.155050  238148 system_pods.go:89] "kube-apiserver-old-k8s-version-715005" [8e225071-07c8-4edf-859f-88b2e5001f12] Running
	I1120 20:52:36.155055  238148 system_pods.go:89] "kube-controller-manager-old-k8s-version-715005" [57c8b5dd-c382-44a4-b0d8-6daff8243ac0] Running
	I1120 20:52:36.155059  238148 system_pods.go:89] "kube-proxy-4pnqq" [b58b571d-f605-4fd4-8afa-d17455aaaaab] Running
	I1120 20:52:36.155070  238148 system_pods.go:89] "kube-scheduler-old-k8s-version-715005" [31fcfd0d-7579-4237-96e8-08202f831aa8] Running
	I1120 20:52:36.155077  238148 system_pods.go:89] "storage-provisioner" [6af79ed2-0bd8-44f7-a2bb-8e7788cf7111] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:52:36.155118  238148 retry.go:31] will retry after 266.136492ms: missing components: kube-dns
	I1120 20:52:36.426029  238148 system_pods.go:86] 8 kube-system pods found
	I1120 20:52:36.426097  238148 system_pods.go:89] "coredns-5dd5756b68-mptgs" [2c198f77-2da3-4dc0-98f2-5263299ec40b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:52:36.426109  238148 system_pods.go:89] "etcd-old-k8s-version-715005" [0bf088f9-234a-4a72-9d1b-d6f088300b75] Running
	I1120 20:52:36.426121  238148 system_pods.go:89] "kindnet-cfz75" [0042d6a2-8643-46e3-902b-f53060fcf7d2] Running
	I1120 20:52:36.426128  238148 system_pods.go:89] "kube-apiserver-old-k8s-version-715005" [8e225071-07c8-4edf-859f-88b2e5001f12] Running
	I1120 20:52:36.426137  238148 system_pods.go:89] "kube-controller-manager-old-k8s-version-715005" [57c8b5dd-c382-44a4-b0d8-6daff8243ac0] Running
	I1120 20:52:36.426146  238148 system_pods.go:89] "kube-proxy-4pnqq" [b58b571d-f605-4fd4-8afa-d17455aaaaab] Running
	I1120 20:52:36.426151  238148 system_pods.go:89] "kube-scheduler-old-k8s-version-715005" [31fcfd0d-7579-4237-96e8-08202f831aa8] Running
	I1120 20:52:36.426156  238148 system_pods.go:89] "storage-provisioner" [6af79ed2-0bd8-44f7-a2bb-8e7788cf7111] Running
	I1120 20:52:36.426166  238148 system_pods.go:126] duration metric: took 274.648335ms to wait for k8s-apps to be running ...
	I1120 20:52:36.426174  238148 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:52:36.426226  238148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:52:36.440582  238148 system_svc.go:56] duration metric: took 14.395654ms WaitForService to wait for kubelet
	I1120 20:52:36.440618  238148 kubeadm.go:587] duration metric: took 13.750832492s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:52:36.440642  238148 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:52:36.443487  238148 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:52:36.443516  238148 node_conditions.go:123] node cpu capacity is 8
	I1120 20:52:36.443534  238148 node_conditions.go:105] duration metric: took 2.886705ms to run NodePressure ...
	I1120 20:52:36.443549  238148 start.go:242] waiting for startup goroutines ...
	I1120 20:52:36.443558  238148 start.go:247] waiting for cluster config update ...
	I1120 20:52:36.443570  238148 start.go:256] writing updated cluster config ...
	I1120 20:52:36.443910  238148 ssh_runner.go:195] Run: rm -f paused
	I1120 20:52:36.447835  238148 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:52:36.451653  238148 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-mptgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:37.457441  238148 pod_ready.go:94] pod "coredns-5dd5756b68-mptgs" is "Ready"
	I1120 20:52:37.457465  238148 pod_ready.go:86] duration metric: took 1.005790774s for pod "coredns-5dd5756b68-mptgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:37.460607  238148 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-715005" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:37.464240  238148 pod_ready.go:94] pod "etcd-old-k8s-version-715005" is "Ready"
	I1120 20:52:37.464258  238148 pod_ready.go:86] duration metric: took 3.632649ms for pod "etcd-old-k8s-version-715005" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:37.466560  238148 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-715005" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:37.469867  238148 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-715005" is "Ready"
	I1120 20:52:37.469885  238148 pod_ready.go:86] duration metric: took 3.300833ms for pod "kube-apiserver-old-k8s-version-715005" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:37.472108  238148 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-715005" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:37.655747  238148 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-715005" is "Ready"
	I1120 20:52:37.655772  238148 pod_ready.go:86] duration metric: took 183.642109ms for pod "kube-controller-manager-old-k8s-version-715005" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:37.856083  238148 pod_ready.go:83] waiting for pod "kube-proxy-4pnqq" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:38.255430  238148 pod_ready.go:94] pod "kube-proxy-4pnqq" is "Ready"
	I1120 20:52:38.255490  238148 pod_ready.go:86] duration metric: took 399.383229ms for pod "kube-proxy-4pnqq" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:38.456007  238148 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-715005" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:38.855855  238148 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-715005" is "Ready"
	I1120 20:52:38.855880  238148 pod_ready.go:86] duration metric: took 399.852833ms for pod "kube-scheduler-old-k8s-version-715005" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:52:38.855890  238148 pod_ready.go:40] duration metric: took 2.408021676s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:52:38.898974  238148 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1120 20:52:38.900810  238148 out.go:203] 
	W1120 20:52:38.902141  238148 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1120 20:52:38.903278  238148 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1120 20:52:38.904757  238148 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-715005" cluster and "default" namespace by default
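
The "minor skew: 6" warning a few lines above comes from comparing the kubectl client's minor version against the cluster's server version. As a minimal, hypothetical sketch of that comparison (the version strings are copied from the log above; the helper and threshold are illustrative assumptions, not minikube's actual code):

	// skew.go - hypothetical sketch of a client/server minor-version skew check.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorOf extracts the minor component from a "major.minor.patch" version string.
	func minorOf(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}

	func main() {
		client, server := "1.34.2", "1.28.0" // values taken from the log above

		cm, _ := minorOf(client)
		sm, _ := minorOf(server)

		skew := cm - sm
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("minor skew: %d\n", skew)
		if skew > 1 { // kubectl's support policy covers +/-1 minor of the server
			fmt.Println("warning: kubectl may have incompatibilities with this cluster")
		}
	}
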
	I1120 20:52:35.086860  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:35.087261  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:35.587416  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:35.587799  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:36.087455  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:36.087855  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:36.586439  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:36.586878  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:37.086442  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:37.086847  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:37.587357  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:37.587842  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:38.086405  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:38.086807  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:38.586903  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:38.587307  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:39.086541  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:39.086974  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:39.586441  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:39.586902  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
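
The repeated "Checking apiserver healthz ... connection refused" lines above are a poll-until-healthy loop: probe the endpoint, and on connection refused or a non-200 response, sleep and retry until a deadline. A minimal Go sketch of that pattern (endpoint, interval, and timeout are illustrative assumptions taken from the log, not minikube's actual implementation):

	// healthz_poll.go - hypothetical sketch of a poll-until-healthy loop.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, interval, timeout time.Duration) error {
		// During bootstrap the apiserver serves /healthz with a cluster-local CA,
		// so this sketch skips certificate verification.
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthy
				}
			}
			time.Sleep(interval) // connection refused or non-200: retry
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		err := waitForHealthz("https://192.168.103.2:8443/healthz", 500*time.Millisecond, 4*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver is healthy")
	}
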
	I1120 20:52:38.655596  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:39.155616  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:39.655601  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:40.154636  242858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:52:40.220222  242858 kubeadm.go:1114] duration metric: took 4.156549094s to wait for elevateKubeSystemPrivileges
	I1120 20:52:40.220261  242858 kubeadm.go:403] duration metric: took 15.823151044s to StartCluster
	I1120 20:52:40.220283  242858 settings.go:142] acquiring lock: {Name:mkd78c1a946fc1da0bff0b049ee93f62b6457c3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:40.220356  242858 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-3769/kubeconfig
	I1120 20:52:40.221736  242858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/kubeconfig: {Name:mk92246a312eabd67c28c34f15135551d85e2541 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:40.221992  242858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 20:52:40.222016  242858 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 20:52:40.221988  242858 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1120 20:52:40.222107  242858 addons.go:70] Setting storage-provisioner=true in profile "no-preload-480337"
	I1120 20:52:40.222123  242858 addons.go:239] Setting addon storage-provisioner=true in "no-preload-480337"
	I1120 20:52:40.222150  242858 host.go:66] Checking if "no-preload-480337" exists ...
	I1120 20:52:40.222183  242858 addons.go:70] Setting default-storageclass=true in profile "no-preload-480337"
	I1120 20:52:40.222205  242858 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-480337"
	I1120 20:52:40.222208  242858 config.go:182] Loaded profile config "no-preload-480337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:52:40.222552  242858 cli_runner.go:164] Run: docker container inspect no-preload-480337 --format={{.State.Status}}
	I1120 20:52:40.222707  242858 cli_runner.go:164] Run: docker container inspect no-preload-480337 --format={{.State.Status}}
	I1120 20:52:40.223621  242858 out.go:179] * Verifying Kubernetes components...
	I1120 20:52:40.224838  242858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:52:40.245867  242858 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:52:40.246538  242858 addons.go:239] Setting addon default-storageclass=true in "no-preload-480337"
	I1120 20:52:40.246583  242858 host.go:66] Checking if "no-preload-480337" exists ...
	I1120 20:52:40.246851  242858 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:52:40.246867  242858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 20:52:40.246921  242858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-480337
	I1120 20:52:40.247059  242858 cli_runner.go:164] Run: docker container inspect no-preload-480337 --format={{.State.Status}}
	I1120 20:52:40.280143  242858 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 20:52:40.280169  242858 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 20:52:40.280238  242858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-480337
	I1120 20:52:40.282336  242858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/no-preload-480337/id_rsa Username:docker}
	I1120 20:52:40.308080  242858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/no-preload-480337/id_rsa Username:docker}
	I1120 20:52:40.319537  242858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 20:52:40.366219  242858 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:52:40.400839  242858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:52:40.419278  242858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 20:52:40.488010  242858 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1120 20:52:40.489308  242858 node_ready.go:35] waiting up to 6m0s for node "no-preload-480337" to be "Ready" ...
	I1120 20:52:40.705813  242858 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1120 20:52:40.707763  242858 addons.go:515] duration metric: took 485.74699ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1120 20:52:40.992476  242858 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-480337" context rescaled to 1 replicas
	W1120 20:52:42.491841  242858 node_ready.go:57] node "no-preload-480337" has "Ready":"False" status (will retry)
	I1120 20:52:40.086449  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:40.086895  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:40.586439  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:40.586951  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:41.087144  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:41.087603  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:41.587035  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:41.587526  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:42.087212  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:42.087656  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:42.587397  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:42.587795  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:43.086420  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:43.086825  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:43.586409  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:43.586769  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:44.087148  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:44.087553  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:44.587022  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:44.587465  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	W1120 20:52:44.493002  242858 node_ready.go:57] node "no-preload-480337" has "Ready":"False" status (will retry)
	W1120 20:52:46.991763  242858 node_ready.go:57] node "no-preload-480337" has "Ready":"False" status (will retry)
	I1120 20:52:45.087033  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:45.087501  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:45.587188  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:45.587598  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:46.087402  231112 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1120 20:52:46.087490  231112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 20:52:46.113986  231112 cri.go:89] found id: "05ee122fe6b0de50eeadfb319bf3df6fb4af9da42fc0b91e6b8a28ed08017207"
	I1120 20:52:46.114011  231112 cri.go:89] found id: "db00732a90f8c6d70acc941ae3bbac6147f57f0981a2c6e08b460374f8ff03d2"
	I1120 20:52:46.114017  231112 cri.go:89] found id: ""
	I1120 20:52:46.114025  231112 logs.go:282] 2 containers: [05ee122fe6b0de50eeadfb319bf3df6fb4af9da42fc0b91e6b8a28ed08017207 db00732a90f8c6d70acc941ae3bbac6147f57f0981a2c6e08b460374f8ff03d2]
	I1120 20:52:46.114081  231112 ssh_runner.go:195] Run: which crictl
	I1120 20:52:46.118161  231112 ssh_runner.go:195] Run: which crictl
	I1120 20:52:46.121804  231112 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1120 20:52:46.121865  231112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 20:52:46.147183  231112 cri.go:89] found id: "94f186f635b9bd6bdc55877c985aae746f486e851f1808379c9916dee256ed9d"
	I1120 20:52:46.147202  231112 cri.go:89] found id: ""
	I1120 20:52:46.147209  231112 logs.go:282] 1 containers: [94f186f635b9bd6bdc55877c985aae746f486e851f1808379c9916dee256ed9d]
	I1120 20:52:46.147274  231112 ssh_runner.go:195] Run: which crictl
	I1120 20:52:46.151212  231112 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1120 20:52:46.151263  231112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 20:52:46.177079  231112 cri.go:89] found id: ""
	I1120 20:52:46.177104  231112 logs.go:282] 0 containers: []
	W1120 20:52:46.177115  231112 logs.go:284] No container was found matching "coredns"
	I1120 20:52:46.177122  231112 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1120 20:52:46.177173  231112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 20:52:46.202904  231112 cri.go:89] found id: "0da6494bbfe7b9edac15def12ca9b9380f57b88a75e7babb5e74e1f6a49fff25"
	I1120 20:52:46.202924  231112 cri.go:89] found id: ""
	I1120 20:52:46.202931  231112 logs.go:282] 1 containers: [0da6494bbfe7b9edac15def12ca9b9380f57b88a75e7babb5e74e1f6a49fff25]
	I1120 20:52:46.202987  231112 ssh_runner.go:195] Run: which crictl
	I1120 20:52:46.207002  231112 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1120 20:52:46.207072  231112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 20:52:46.232402  231112 cri.go:89] found id: ""
	I1120 20:52:46.232429  231112 logs.go:282] 0 containers: []
	W1120 20:52:46.232439  231112 logs.go:284] No container was found matching "kube-proxy"
	I1120 20:52:46.232445  231112 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 20:52:46.232504  231112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 20:52:46.258402  231112 cri.go:89] found id: "f0ce5ca33777dc7f8ba525834fa064cdbe8da953cf81814678ec72666138b5a9"
	I1120 20:52:46.258424  231112 cri.go:89] found id: ""
	I1120 20:52:46.258434  231112 logs.go:282] 1 containers: [f0ce5ca33777dc7f8ba525834fa064cdbe8da953cf81814678ec72666138b5a9]
	I1120 20:52:46.258490  231112 ssh_runner.go:195] Run: which crictl
	I1120 20:52:46.262337  231112 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1120 20:52:46.262416  231112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 20:52:46.286721  231112 cri.go:89] found id: ""
	I1120 20:52:46.286747  231112 logs.go:282] 0 containers: []
	W1120 20:52:46.286757  231112 logs.go:284] No container was found matching "kindnet"
	I1120 20:52:46.286764  231112 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1120 20:52:46.286820  231112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 20:52:46.312487  231112 cri.go:89] found id: ""
	I1120 20:52:46.312510  231112 logs.go:282] 0 containers: []
	W1120 20:52:46.312532  231112 logs.go:284] No container was found matching "storage-provisioner"
	I1120 20:52:46.312549  231112 logs.go:123] Gathering logs for kubelet ...
	I1120 20:52:46.312562  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 20:52:46.373147  231112 logs.go:123] Gathering logs for dmesg ...
	I1120 20:52:46.373183  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 20:52:46.386704  231112 logs.go:123] Gathering logs for kube-apiserver [db00732a90f8c6d70acc941ae3bbac6147f57f0981a2c6e08b460374f8ff03d2] ...
	I1120 20:52:46.386729  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 db00732a90f8c6d70acc941ae3bbac6147f57f0981a2c6e08b460374f8ff03d2"
	I1120 20:52:46.420352  231112 logs.go:123] Gathering logs for etcd [94f186f635b9bd6bdc55877c985aae746f486e851f1808379c9916dee256ed9d] ...
	I1120 20:52:46.420408  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94f186f635b9bd6bdc55877c985aae746f486e851f1808379c9916dee256ed9d"
	I1120 20:52:46.456347  231112 logs.go:123] Gathering logs for kube-scheduler [0da6494bbfe7b9edac15def12ca9b9380f57b88a75e7babb5e74e1f6a49fff25] ...
	I1120 20:52:46.456396  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0da6494bbfe7b9edac15def12ca9b9380f57b88a75e7babb5e74e1f6a49fff25"
	I1120 20:52:46.490690  231112 logs.go:123] Gathering logs for kube-controller-manager [f0ce5ca33777dc7f8ba525834fa064cdbe8da953cf81814678ec72666138b5a9] ...
	I1120 20:52:46.490718  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f0ce5ca33777dc7f8ba525834fa064cdbe8da953cf81814678ec72666138b5a9"
	I1120 20:52:46.523130  231112 logs.go:123] Gathering logs for containerd ...
	I1120 20:52:46.523162  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1120 20:52:46.559022  231112 logs.go:123] Gathering logs for describe nodes ...
	I1120 20:52:46.559052  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 20:52:46.617093  231112 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1120 20:52:46.617118  231112 logs.go:123] Gathering logs for kube-apiserver [05ee122fe6b0de50eeadfb319bf3df6fb4af9da42fc0b91e6b8a28ed08017207] ...
	I1120 20:52:46.617135  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 05ee122fe6b0de50eeadfb319bf3df6fb4af9da42fc0b91e6b8a28ed08017207"
	I1120 20:52:46.650184  231112 logs.go:123] Gathering logs for container status ...
	I1120 20:52:46.650212  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 20:52:49.181191  231112 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 20:52:49.181691  231112 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1120 20:52:49.181738  231112 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1120 20:52:49.181791  231112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 20:52:49.214258  231112 cri.go:89] found id: "05ee122fe6b0de50eeadfb319bf3df6fb4af9da42fc0b91e6b8a28ed08017207"
	I1120 20:52:49.214281  231112 cri.go:89] found id: "db00732a90f8c6d70acc941ae3bbac6147f57f0981a2c6e08b460374f8ff03d2"
	I1120 20:52:49.214287  231112 cri.go:89] found id: ""
	I1120 20:52:49.214295  231112 logs.go:282] 2 containers: [05ee122fe6b0de50eeadfb319bf3df6fb4af9da42fc0b91e6b8a28ed08017207 db00732a90f8c6d70acc941ae3bbac6147f57f0981a2c6e08b460374f8ff03d2]
	I1120 20:52:49.214377  231112 ssh_runner.go:195] Run: which crictl
	I1120 20:52:49.218936  231112 ssh_runner.go:195] Run: which crictl
	I1120 20:52:49.223293  231112 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1120 20:52:49.223350  231112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 20:52:49.253568  231112 cri.go:89] found id: "94f186f635b9bd6bdc55877c985aae746f486e851f1808379c9916dee256ed9d"
	I1120 20:52:49.253588  231112 cri.go:89] found id: ""
	I1120 20:52:49.253596  231112 logs.go:282] 1 containers: [94f186f635b9bd6bdc55877c985aae746f486e851f1808379c9916dee256ed9d]
	I1120 20:52:49.253666  231112 ssh_runner.go:195] Run: which crictl
	I1120 20:52:49.257651  231112 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1120 20:52:49.257703  231112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 20:52:49.283091  231112 cri.go:89] found id: ""
	I1120 20:52:49.283122  231112 logs.go:282] 0 containers: []
	W1120 20:52:49.283129  231112 logs.go:284] No container was found matching "coredns"
	I1120 20:52:49.283136  231112 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1120 20:52:49.283193  231112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 20:52:49.311309  231112 cri.go:89] found id: "0da6494bbfe7b9edac15def12ca9b9380f57b88a75e7babb5e74e1f6a49fff25"
	I1120 20:52:49.311330  231112 cri.go:89] found id: ""
	I1120 20:52:49.311338  231112 logs.go:282] 1 containers: [0da6494bbfe7b9edac15def12ca9b9380f57b88a75e7babb5e74e1f6a49fff25]
	I1120 20:52:49.311417  231112 ssh_runner.go:195] Run: which crictl
	I1120 20:52:49.315535  231112 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1120 20:52:49.315606  231112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 20:52:49.341159  231112 cri.go:89] found id: ""
	I1120 20:52:49.341184  231112 logs.go:282] 0 containers: []
	W1120 20:52:49.341192  231112 logs.go:284] No container was found matching "kube-proxy"
	I1120 20:52:49.341198  231112 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 20:52:49.341249  231112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 20:52:49.369151  231112 cri.go:89] found id: "f0ce5ca33777dc7f8ba525834fa064cdbe8da953cf81814678ec72666138b5a9"
	I1120 20:52:49.369179  231112 cri.go:89] found id: ""
	I1120 20:52:49.369189  231112 logs.go:282] 1 containers: [f0ce5ca33777dc7f8ba525834fa064cdbe8da953cf81814678ec72666138b5a9]
	I1120 20:52:49.369244  231112 ssh_runner.go:195] Run: which crictl
	I1120 20:52:49.374014  231112 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1120 20:52:49.374078  231112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 20:52:49.400789  231112 cri.go:89] found id: ""
	I1120 20:52:49.400814  231112 logs.go:282] 0 containers: []
	W1120 20:52:49.400823  231112 logs.go:284] No container was found matching "kindnet"
	I1120 20:52:49.400830  231112 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1120 20:52:49.400886  231112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 20:52:49.429406  231112 cri.go:89] found id: ""
	I1120 20:52:49.429435  231112 logs.go:282] 0 containers: []
	W1120 20:52:49.429444  231112 logs.go:284] No container was found matching "storage-provisioner"
	I1120 20:52:49.429460  231112 logs.go:123] Gathering logs for container status ...
	I1120 20:52:49.429472  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 20:52:49.461608  231112 logs.go:123] Gathering logs for kubelet ...
	I1120 20:52:49.461637  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 20:52:49.534466  231112 logs.go:123] Gathering logs for describe nodes ...
	I1120 20:52:49.534499  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 20:52:49.599942  231112 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1120 20:52:49.600005  231112 logs.go:123] Gathering logs for etcd [94f186f635b9bd6bdc55877c985aae746f486e851f1808379c9916dee256ed9d] ...
	I1120 20:52:49.600023  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94f186f635b9bd6bdc55877c985aae746f486e851f1808379c9916dee256ed9d"
	I1120 20:52:49.642769  231112 logs.go:123] Gathering logs for kube-scheduler [0da6494bbfe7b9edac15def12ca9b9380f57b88a75e7babb5e74e1f6a49fff25] ...
	I1120 20:52:49.642794  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0da6494bbfe7b9edac15def12ca9b9380f57b88a75e7babb5e74e1f6a49fff25"
	I1120 20:52:49.680474  231112 logs.go:123] Gathering logs for kube-controller-manager [f0ce5ca33777dc7f8ba525834fa064cdbe8da953cf81814678ec72666138b5a9] ...
	I1120 20:52:49.680499  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f0ce5ca33777dc7f8ba525834fa064cdbe8da953cf81814678ec72666138b5a9"
	I1120 20:52:49.719530  231112 logs.go:123] Gathering logs for containerd ...
	I1120 20:52:49.719581  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1120 20:52:49.756004  231112 logs.go:123] Gathering logs for dmesg ...
	I1120 20:52:49.756046  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 20:52:49.770602  231112 logs.go:123] Gathering logs for kube-apiserver [05ee122fe6b0de50eeadfb319bf3df6fb4af9da42fc0b91e6b8a28ed08017207] ...
	I1120 20:52:49.770635  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 05ee122fe6b0de50eeadfb319bf3df6fb4af9da42fc0b91e6b8a28ed08017207"
	I1120 20:52:49.805680  231112 logs.go:123] Gathering logs for kube-apiserver [db00732a90f8c6d70acc941ae3bbac6147f57f0981a2c6e08b460374f8ff03d2] ...
	I1120 20:52:49.805726  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 db00732a90f8c6d70acc941ae3bbac6147f57f0981a2c6e08b460374f8ff03d2"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	cff0ed278d62d       56cc512116c8f       10 seconds ago      Running             busybox                   0                   71df75dca1dd7       busybox                                          default
	b238eb5506919       ead0a4a53df89       14 seconds ago      Running             coredns                   0                   3b8b46903b404       coredns-5dd5756b68-mptgs                         kube-system
	30d0f5bdd9f8b       6e38f40d628db       14 seconds ago      Running             storage-provisioner       0                   dad0efe137051       storage-provisioner                              kube-system
	31461353b2022       409467f978b4a       25 seconds ago      Running             kindnet-cni               0                   0e6a4046c3e58       kindnet-cfz75                                    kube-system
	cd89c06d39abe       ea1030da44aa1       27 seconds ago      Running             kube-proxy                0                   d1996aaa95795       kube-proxy-4pnqq                                 kube-system
	129c92b2baf2f       4be79c38a4bab       45 seconds ago      Running             kube-controller-manager   0                   37e7b3b214e2f       kube-controller-manager-old-k8s-version-715005   kube-system
	51b9d78e7f6a4       f6f496300a2ae       45 seconds ago      Running             kube-scheduler            0                   659db2c56a9ce       kube-scheduler-old-k8s-version-715005            kube-system
	9b9c74b02fcb4       bb5e0dde9054c       45 seconds ago      Running             kube-apiserver            0                   046251feeeba0       kube-apiserver-old-k8s-version-715005            kube-system
	bd79d6bd69267       73deb9a3f7025       45 seconds ago      Running             etcd                      0                   0ccf99ccdb55a       etcd-old-k8s-version-715005                      kube-system
	
	
	==> containerd <==
	Nov 20 20:52:36 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:36.331054404Z" level=info msg="CreateContainer within sandbox \"dad0efe1370515d6a5e283f690b5861af819ca7c438225b3992c0fcc85ae50b6\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"30d0f5bdd9f8bfd2c0796639f0ed8e490844e6c98a2754a2c49f7959c1a1f2a5\""
	Nov 20 20:52:36 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:36.331514371Z" level=info msg="StartContainer for \"30d0f5bdd9f8bfd2c0796639f0ed8e490844e6c98a2754a2c49f7959c1a1f2a5\""
	Nov 20 20:52:36 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:36.332524517Z" level=info msg="connecting to shim 30d0f5bdd9f8bfd2c0796639f0ed8e490844e6c98a2754a2c49f7959c1a1f2a5" address="unix:///run/containerd/s/f85382523371363a580faab823f4564ef702cb91dd77ece3725ccb1af7d38b25" protocol=ttrpc version=3
	Nov 20 20:52:36 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:36.334597791Z" level=info msg="CreateContainer within sandbox \"3b8b46903b404473bef4a273a4ab27ff906ec052ea45e4b4212bd43b455cdbd2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b238eb5506919601ee7f82047857465eb95fdc7e8c4184d95c6a62098235f212\""
	Nov 20 20:52:36 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:36.335057955Z" level=info msg="StartContainer for \"b238eb5506919601ee7f82047857465eb95fdc7e8c4184d95c6a62098235f212\""
	Nov 20 20:52:36 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:36.335820086Z" level=info msg="connecting to shim b238eb5506919601ee7f82047857465eb95fdc7e8c4184d95c6a62098235f212" address="unix:///run/containerd/s/801daa28e941e0441ab99e0b93ec314b977136497b29ce8e8c5cb393ef1573e3" protocol=ttrpc version=3
	Nov 20 20:52:36 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:36.383725768Z" level=info msg="StartContainer for \"b238eb5506919601ee7f82047857465eb95fdc7e8c4184d95c6a62098235f212\" returns successfully"
	Nov 20 20:52:36 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:36.384073634Z" level=info msg="StartContainer for \"30d0f5bdd9f8bfd2c0796639f0ed8e490844e6c98a2754a2c49f7959c1a1f2a5\" returns successfully"
	Nov 20 20:52:39 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:39.364312535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3a1d0e8f-ce19-4ac1-bea8-96d6e879131e,Namespace:default,Attempt:0,}"
	Nov 20 20:52:39 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:39.408963097Z" level=info msg="connecting to shim 71df75dca1dd7b6b576f14d9ba6b5539f9f6f882cc9da670c1b41fd83dcc5c08" address="unix:///run/containerd/s/ada5c2f6fb8ba3beb99f5d6ca5c34f6ee268100be418787584e6f9aad68bf647" namespace=k8s.io protocol=ttrpc version=3
	Nov 20 20:52:39 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:39.484006268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3a1d0e8f-ce19-4ac1-bea8-96d6e879131e,Namespace:default,Attempt:0,} returns sandbox id \"71df75dca1dd7b6b576f14d9ba6b5539f9f6f882cc9da670c1b41fd83dcc5c08\""
	Nov 20 20:52:39 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:39.485812554Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.847165187Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.847932562Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396647"
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.849388484Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.850951671Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.851443868Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 1.365589938s"
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.851488669Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.853206452Z" level=info msg="CreateContainer within sandbox \"71df75dca1dd7b6b576f14d9ba6b5539f9f6f882cc9da670c1b41fd83dcc5c08\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.860224048Z" level=info msg="Container cff0ed278d62dd9ed10cae5e5f96874eb05c5e603320748af2dfb6ef4f86494a: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.865791220Z" level=info msg="CreateContainer within sandbox \"71df75dca1dd7b6b576f14d9ba6b5539f9f6f882cc9da670c1b41fd83dcc5c08\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"cff0ed278d62dd9ed10cae5e5f96874eb05c5e603320748af2dfb6ef4f86494a\""
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.866334070Z" level=info msg="StartContainer for \"cff0ed278d62dd9ed10cae5e5f96874eb05c5e603320748af2dfb6ef4f86494a\""
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.867211058Z" level=info msg="connecting to shim cff0ed278d62dd9ed10cae5e5f96874eb05c5e603320748af2dfb6ef4f86494a" address="unix:///run/containerd/s/ada5c2f6fb8ba3beb99f5d6ca5c34f6ee268100be418787584e6f9aad68bf647" protocol=ttrpc version=3
	Nov 20 20:52:40 old-k8s-version-715005 containerd[661]: time="2025-11-20T20:52:40.920084522Z" level=info msg="StartContainer for \"cff0ed278d62dd9ed10cae5e5f96874eb05c5e603320748af2dfb6ef4f86494a\" returns successfully"
	Nov 20 20:52:48 old-k8s-version-715005 containerd[661]: E1120 20:52:48.135214     661 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [b238eb5506919601ee7f82047857465eb95fdc7e8c4184d95c6a62098235f212] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44202 - 26657 "HINFO IN 3488307865202641534.2109671425240872498. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.449966311s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-715005
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-715005
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=old-k8s-version-715005
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_52_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:52:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-715005
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:52:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:52:41 +0000   Thu, 20 Nov 2025 20:52:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:52:41 +0000   Thu, 20 Nov 2025 20:52:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:52:41 +0000   Thu, 20 Nov 2025 20:52:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:52:41 +0000   Thu, 20 Nov 2025 20:52:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-715005
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                81d39874-f554-4f8e-9c90-bef57a66d9b2
	  Boot ID:                    7bcace10-faf8-4276-88b3-44b8d57bd915
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-mptgs                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-old-k8s-version-715005                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         42s
	  kube-system                 kindnet-cfz75                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-715005             250m (3%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-715005    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-4pnqq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-715005             100m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 41s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  41s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  41s   kubelet          Node old-k8s-version-715005 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s   kubelet          Node old-k8s-version-715005 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s   kubelet          Node old-k8s-version-715005 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node old-k8s-version-715005 event: Registered Node old-k8s-version-715005 in Controller
	  Normal  NodeReady                16s   kubelet          Node old-k8s-version-715005 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov20 20:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001791] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.083011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400115] i8042: Warning: Keylock active
	[  +0.013837] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499559] block sda: the capability attribute has been deprecated.
	[  +0.087912] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024934] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.433429] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [bd79d6bd6926714eb9fe7608d919a6bea130b15fb4cba41cc3d774f5a9ab2a7e] <==
	{"level":"info","ts":"2025-11-20T20:52:05.229987Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-20T20:52:05.230074Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-20T20:52:05.230114Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-20T20:52:05.918054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-20T20:52:05.918158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-20T20:52:05.918188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-20T20:52:05.918206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-20T20:52:05.918211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-20T20:52:05.918219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-20T20:52:05.918227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-20T20:52:05.919055Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T20:52:05.919714Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-715005 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-20T20:52:05.919751Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-20T20:52:05.919774Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-20T20:52:05.920218Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-20T20:52:05.920249Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-20T20:52:05.920456Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T20:52:05.920676Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T20:52:05.920973Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T20:52:05.921153Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-20T20:52:05.923247Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-20T20:52:28.821816Z","caller":"traceutil/trace.go:171","msg":"trace[1388972908] linearizableReadLoop","detail":"{readStateIndex:393; appliedIndex:392; }","duration":"204.166324ms","start":"2025-11-20T20:52:28.617625Z","end":"2025-11-20T20:52:28.821792Z","steps":["trace[1388972908] 'read index received'  (duration: 127.067096ms)","trace[1388972908] 'applied index is now lower than readState.Index'  (duration: 77.098386ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T20:52:28.821848Z","caller":"traceutil/trace.go:171","msg":"trace[50526453] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"205.569936ms","start":"2025-11-20T20:52:28.616251Z","end":"2025-11-20T20:52:28.821821Z","steps":["trace[50526453] 'process raft request'  (duration: 128.495071ms)","trace[50526453] 'compare'  (duration: 76.913082ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T20:52:28.822044Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.39432ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/old-k8s-version-715005\" ","response":"range_response_count:1 size:4738"}
	{"level":"info","ts":"2025-11-20T20:52:28.822092Z","caller":"traceutil/trace.go:171","msg":"trace[1042929095] range","detail":"{range_begin:/registry/minions/old-k8s-version-715005; range_end:; response_count:1; response_revision:378; }","duration":"204.491416ms","start":"2025-11-20T20:52:28.617589Z","end":"2025-11-20T20:52:28.822081Z","steps":["trace[1042929095] 'agreement among raft nodes before linearized reading'  (duration: 204.292562ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:52:51 up 35 min,  0 user,  load average: 3.54, 3.02, 1.94
	Linux old-k8s-version-715005 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [31461353b202200468aa23f3972e4e462db51e670ff467500d67a4a3bf84828c] <==
	I1120 20:52:25.577955       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 20:52:25.595537       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 20:52:25.595672       1 main.go:148] setting mtu 1500 for CNI 
	I1120 20:52:25.595689       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 20:52:25.595717       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T20:52:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 20:52:25.799345       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 20:52:25.799395       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 20:52:25.799411       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 20:52:25.799764       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 20:52:26.195526       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 20:52:26.195566       1 metrics.go:72] Registering metrics
	I1120 20:52:26.195655       1 controller.go:711] "Syncing nftables rules"
	I1120 20:52:35.807032       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 20:52:35.807104       1 main.go:301] handling current node
	I1120 20:52:45.799883       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 20:52:45.799943       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9b9c74b02fcb4b147d54a9f31669c3eaf326a38bd4dcd1194a2c0d07d79aaca1] <==
	I1120 20:52:07.075686       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1120 20:52:07.076243       1 shared_informer.go:318] Caches are synced for configmaps
	I1120 20:52:07.077707       1 controller.go:624] quota admission added evaluator for: namespaces
	I1120 20:52:07.078484       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1120 20:52:07.078627       1 aggregator.go:166] initial CRD sync complete...
	I1120 20:52:07.078643       1 autoregister_controller.go:141] Starting autoregister controller
	I1120 20:52:07.078650       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 20:52:07.078659       1 cache.go:39] Caches are synced for autoregister controller
	I1120 20:52:07.113958       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 20:52:07.990626       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 20:52:07.994280       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 20:52:07.994300       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 20:52:08.407696       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 20:52:08.442971       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 20:52:08.587450       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 20:52:08.593189       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1120 20:52:08.594181       1 controller.go:624] quota admission added evaluator for: endpoints
	I1120 20:52:08.598322       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 20:52:09.034088       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1120 20:52:10.221091       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1120 20:52:10.238151       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 20:52:10.251155       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1120 20:52:22.642746       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1120 20:52:22.642831       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1120 20:52:22.798006       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [129c92b2baf2f5d973e359f010839efa78cf975a381962dd3873c5fa1d291869] <==
	I1120 20:52:22.089891       1 shared_informer.go:318] Caches are synced for deployment
	I1120 20:52:22.097778       1 shared_informer.go:318] Caches are synced for resource quota
	I1120 20:52:22.410232       1 shared_informer.go:318] Caches are synced for garbage collector
	I1120 20:52:22.486848       1 shared_informer.go:318] Caches are synced for garbage collector
	I1120 20:52:22.486886       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1120 20:52:22.654034       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4pnqq"
	I1120 20:52:22.655412       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-cfz75"
	I1120 20:52:22.803882       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1120 20:52:22.898093       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-hnbwt"
	I1120 20:52:22.906310       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-mptgs"
	I1120 20:52:22.916699       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="112.923624ms"
	I1120 20:52:22.927435       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.677803ms"
	I1120 20:52:22.951258       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.764186ms"
	I1120 20:52:22.951454       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="134.492µs"
	I1120 20:52:23.145279       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1120 20:52:23.157612       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-hnbwt"
	I1120 20:52:23.166328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.800363ms"
	I1120 20:52:23.172401       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.997974ms"
	I1120 20:52:23.172562       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.526µs"
	I1120 20:52:35.907119       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="129.527µs"
	I1120 20:52:35.922303       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="129.211µs"
	I1120 20:52:36.421114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="122.801µs"
	I1120 20:52:36.836379       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1120 20:52:37.420026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.210833ms"
	I1120 20:52:37.420110       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.049µs"
	
	
	==> kube-proxy [cd89c06d39abe013aea89d98c9df900a06c30cb2d739e0a9660b3d6b845006f2] <==
	I1120 20:52:23.316357       1 server_others.go:69] "Using iptables proxy"
	I1120 20:52:23.325940       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1120 20:52:23.346710       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 20:52:23.348974       1 server_others.go:152] "Using iptables Proxier"
	I1120 20:52:23.349015       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1120 20:52:23.349021       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1120 20:52:23.349053       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1120 20:52:23.349270       1 server.go:846] "Version info" version="v1.28.0"
	I1120 20:52:23.349284       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:52:23.350668       1 config.go:188] "Starting service config controller"
	I1120 20:52:23.350707       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1120 20:52:23.350735       1 config.go:97] "Starting endpoint slice config controller"
	I1120 20:52:23.350739       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1120 20:52:23.351421       1 config.go:315] "Starting node config controller"
	I1120 20:52:23.351457       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1120 20:52:23.450834       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1120 20:52:23.450857       1 shared_informer.go:318] Caches are synced for service config
	I1120 20:52:23.452185       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [51b9d78e7f6a4ddaf97aa93f6a3303b88d8ea9c948782289642216f4875377d6] <==
	W1120 20:52:07.043502       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1120 20:52:07.043581       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1120 20:52:07.043601       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1120 20:52:07.043728       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1120 20:52:07.043770       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1120 20:52:07.043792       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1120 20:52:07.044129       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1120 20:52:07.044156       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1120 20:52:07.044351       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1120 20:52:07.044412       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1120 20:52:07.881440       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1120 20:52:07.881475       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1120 20:52:07.903871       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1120 20:52:07.903902       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1120 20:52:07.933445       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1120 20:52:07.933478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1120 20:52:08.001403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1120 20:52:08.001449       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1120 20:52:08.038750       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1120 20:52:08.038792       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1120 20:52:08.103503       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1120 20:52:08.103539       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1120 20:52:08.137088       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1120 20:52:08.137130       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1120 20:52:11.239121       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 20 20:52:21 old-k8s-version-715005 kubelet[1553]: I1120 20:52:21.975542    1553 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 20:52:22 old-k8s-version-715005 kubelet[1553]: I1120 20:52:22.664556    1553 topology_manager.go:215] "Topology Admit Handler" podUID="0042d6a2-8643-46e3-902b-f53060fcf7d2" podNamespace="kube-system" podName="kindnet-cfz75"
	Nov 20 20:52:22 old-k8s-version-715005 kubelet[1553]: I1120 20:52:22.665169    1553 topology_manager.go:215] "Topology Admit Handler" podUID="b58b571d-f605-4fd4-8afa-d17455aaaaab" podNamespace="kube-system" podName="kube-proxy-4pnqq"
	Nov 20 20:52:22 old-k8s-version-715005 kubelet[1553]: I1120 20:52:22.679738    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b58b571d-f605-4fd4-8afa-d17455aaaaab-kube-proxy\") pod \"kube-proxy-4pnqq\" (UID: \"b58b571d-f605-4fd4-8afa-d17455aaaaab\") " pod="kube-system/kube-proxy-4pnqq"
	Nov 20 20:52:22 old-k8s-version-715005 kubelet[1553]: I1120 20:52:22.679779    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0042d6a2-8643-46e3-902b-f53060fcf7d2-xtables-lock\") pod \"kindnet-cfz75\" (UID: \"0042d6a2-8643-46e3-902b-f53060fcf7d2\") " pod="kube-system/kindnet-cfz75"
	Nov 20 20:52:22 old-k8s-version-715005 kubelet[1553]: I1120 20:52:22.679797    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0042d6a2-8643-46e3-902b-f53060fcf7d2-lib-modules\") pod \"kindnet-cfz75\" (UID: \"0042d6a2-8643-46e3-902b-f53060fcf7d2\") " pod="kube-system/kindnet-cfz75"
	Nov 20 20:52:22 old-k8s-version-715005 kubelet[1553]: I1120 20:52:22.679815    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfvpc\" (UniqueName: \"kubernetes.io/projected/0042d6a2-8643-46e3-902b-f53060fcf7d2-kube-api-access-sfvpc\") pod \"kindnet-cfz75\" (UID: \"0042d6a2-8643-46e3-902b-f53060fcf7d2\") " pod="kube-system/kindnet-cfz75"
	Nov 20 20:52:22 old-k8s-version-715005 kubelet[1553]: I1120 20:52:22.679837    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b58b571d-f605-4fd4-8afa-d17455aaaaab-lib-modules\") pod \"kube-proxy-4pnqq\" (UID: \"b58b571d-f605-4fd4-8afa-d17455aaaaab\") " pod="kube-system/kube-proxy-4pnqq"
	Nov 20 20:52:22 old-k8s-version-715005 kubelet[1553]: I1120 20:52:22.679855    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0042d6a2-8643-46e3-902b-f53060fcf7d2-cni-cfg\") pod \"kindnet-cfz75\" (UID: \"0042d6a2-8643-46e3-902b-f53060fcf7d2\") " pod="kube-system/kindnet-cfz75"
	Nov 20 20:52:22 old-k8s-version-715005 kubelet[1553]: I1120 20:52:22.679871    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b58b571d-f605-4fd4-8afa-d17455aaaaab-xtables-lock\") pod \"kube-proxy-4pnqq\" (UID: \"b58b571d-f605-4fd4-8afa-d17455aaaaab\") " pod="kube-system/kube-proxy-4pnqq"
	Nov 20 20:52:22 old-k8s-version-715005 kubelet[1553]: I1120 20:52:22.679888    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59kdw\" (UniqueName: \"kubernetes.io/projected/b58b571d-f605-4fd4-8afa-d17455aaaaab-kube-api-access-59kdw\") pod \"kube-proxy-4pnqq\" (UID: \"b58b571d-f605-4fd4-8afa-d17455aaaaab\") " pod="kube-system/kube-proxy-4pnqq"
	Nov 20 20:52:23 old-k8s-version-715005 kubelet[1553]: I1120 20:52:23.378345    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4pnqq" podStartSLOduration=1.378305026 podCreationTimestamp="2025-11-20 20:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:23.378202335 +0000 UTC m=+13.190629566" watchObservedRunningTime="2025-11-20 20:52:23.378305026 +0000 UTC m=+13.190732254"
	Nov 20 20:52:35 old-k8s-version-715005 kubelet[1553]: I1120 20:52:35.883329    1553 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 20 20:52:35 old-k8s-version-715005 kubelet[1553]: I1120 20:52:35.907064    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-cfz75" podStartSLOduration=11.958094777 podCreationTimestamp="2025-11-20 20:52:22 +0000 UTC" firstStartedPulling="2025-11-20 20:52:23.304498177 +0000 UTC m=+13.116925406" lastFinishedPulling="2025-11-20 20:52:25.253410661 +0000 UTC m=+15.065837882" observedRunningTime="2025-11-20 20:52:26.388782718 +0000 UTC m=+16.201209947" watchObservedRunningTime="2025-11-20 20:52:35.907007253 +0000 UTC m=+25.719434486"
	Nov 20 20:52:35 old-k8s-version-715005 kubelet[1553]: I1120 20:52:35.907524    1553 topology_manager.go:215] "Topology Admit Handler" podUID="2c198f77-2da3-4dc0-98f2-5263299ec40b" podNamespace="kube-system" podName="coredns-5dd5756b68-mptgs"
	Nov 20 20:52:35 old-k8s-version-715005 kubelet[1553]: I1120 20:52:35.907700    1553 topology_manager.go:215] "Topology Admit Handler" podUID="6af79ed2-0bd8-44f7-a2bb-8e7788cf7111" podNamespace="kube-system" podName="storage-provisioner"
	Nov 20 20:52:36 old-k8s-version-715005 kubelet[1553]: I1120 20:52:36.082664    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm78s\" (UniqueName: \"kubernetes.io/projected/2c198f77-2da3-4dc0-98f2-5263299ec40b-kube-api-access-xm78s\") pod \"coredns-5dd5756b68-mptgs\" (UID: \"2c198f77-2da3-4dc0-98f2-5263299ec40b\") " pod="kube-system/coredns-5dd5756b68-mptgs"
	Nov 20 20:52:36 old-k8s-version-715005 kubelet[1553]: I1120 20:52:36.082744    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78q9l\" (UniqueName: \"kubernetes.io/projected/6af79ed2-0bd8-44f7-a2bb-8e7788cf7111-kube-api-access-78q9l\") pod \"storage-provisioner\" (UID: \"6af79ed2-0bd8-44f7-a2bb-8e7788cf7111\") " pod="kube-system/storage-provisioner"
	Nov 20 20:52:36 old-k8s-version-715005 kubelet[1553]: I1120 20:52:36.082779    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c198f77-2da3-4dc0-98f2-5263299ec40b-config-volume\") pod \"coredns-5dd5756b68-mptgs\" (UID: \"2c198f77-2da3-4dc0-98f2-5263299ec40b\") " pod="kube-system/coredns-5dd5756b68-mptgs"
	Nov 20 20:52:36 old-k8s-version-715005 kubelet[1553]: I1120 20:52:36.082808    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6af79ed2-0bd8-44f7-a2bb-8e7788cf7111-tmp\") pod \"storage-provisioner\" (UID: \"6af79ed2-0bd8-44f7-a2bb-8e7788cf7111\") " pod="kube-system/storage-provisioner"
	Nov 20 20:52:36 old-k8s-version-715005 kubelet[1553]: I1120 20:52:36.410010    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.409958355 podCreationTimestamp="2025-11-20 20:52:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:36.409796502 +0000 UTC m=+26.222223732" watchObservedRunningTime="2025-11-20 20:52:36.409958355 +0000 UTC m=+26.222385586"
	Nov 20 20:52:36 old-k8s-version-715005 kubelet[1553]: I1120 20:52:36.421317    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-mptgs" podStartSLOduration=14.42125597 podCreationTimestamp="2025-11-20 20:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:36.421053425 +0000 UTC m=+26.233480850" watchObservedRunningTime="2025-11-20 20:52:36.42125597 +0000 UTC m=+26.233683201"
	Nov 20 20:52:39 old-k8s-version-715005 kubelet[1553]: I1120 20:52:39.055805    1553 topology_manager.go:215] "Topology Admit Handler" podUID="3a1d0e8f-ce19-4ac1-bea8-96d6e879131e" podNamespace="default" podName="busybox"
	Nov 20 20:52:39 old-k8s-version-715005 kubelet[1553]: I1120 20:52:39.201304    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djkb2\" (UniqueName: \"kubernetes.io/projected/3a1d0e8f-ce19-4ac1-bea8-96d6e879131e-kube-api-access-djkb2\") pod \"busybox\" (UID: \"3a1d0e8f-ce19-4ac1-bea8-96d6e879131e\") " pod="default/busybox"
	Nov 20 20:52:41 old-k8s-version-715005 kubelet[1553]: I1120 20:52:41.425214    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.058833545 podCreationTimestamp="2025-11-20 20:52:39 +0000 UTC" firstStartedPulling="2025-11-20 20:52:39.485462465 +0000 UTC m=+29.297889678" lastFinishedPulling="2025-11-20 20:52:40.851802022 +0000 UTC m=+30.664229235" observedRunningTime="2025-11-20 20:52:41.424674281 +0000 UTC m=+31.237101511" watchObservedRunningTime="2025-11-20 20:52:41.425173102 +0000 UTC m=+31.237600331"
	
	
	==> storage-provisioner [30d0f5bdd9f8bfd2c0796639f0ed8e490844e6c98a2754a2c49f7959c1a1f2a5] <==
	I1120 20:52:36.391585       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 20:52:36.400469       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 20:52:36.400520       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1120 20:52:36.407549       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 20:52:36.407928       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6e5947a7-1f12-4fc5-bee8-e5a8d2f00419", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-715005_3ea8dec4-ab4d-4039-9450-ca2b7352ce92 became leader
	I1120 20:52:36.407957       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-715005_3ea8dec4-ab4d-4039-9450-ca2b7352ce92!
	I1120 20:52:36.509166       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-715005_3ea8dec4-ab4d-4039-9450-ca2b7352ce92!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-715005 -n old-k8s-version-715005
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-715005 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (12.86s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (12.68s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-480337 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [73993c68-9404-4e81-9899-9c821e232fe0] Pending
helpers_test.go:352: "busybox" [73993c68-9404-4e81-9899-9c821e232fe0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1120 20:52:58.313333    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [73993c68-9404-4e81-9899-9c821e232fe0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003172893s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-480337 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
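The assertion above only inspects the soft open-file limit seen inside the busybox pod. As a minimal sketch for reproducing the check by hand (the profile name, pod name, and expected value 1048576 are taken from the log above; the docker and systemctl checks below are an assumption about where the pod inherits its nofile limit, not something this report verifies):

    # re-run the failed check against the same profile
    kubectl --context no-preload-480337 exec busybox -- /bin/sh -c "ulimit -n"

    # compare with the limit inside the minikube node container itself
    docker exec no-preload-480337 sh -c "ulimit -n"

    # and with the limit configured for the containerd service on that node
    # (assumes the unit is named containerd, as in the kicbase image)
    docker exec no-preload-480337 systemctl show containerd -p LimitNOFILE

If the node container and its containerd service already report 1048576 while the pod still sees 1024, the reduction is happening between the runtime and the container, which narrows down where to look.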
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-480337
helpers_test.go:243: (dbg) docker inspect no-preload-480337:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fd4a82214f100da275f3744fb74e80b192d9bac2b1eed8547b3d4f0b9763ea08",
	        "Created": "2025-11-20T20:52:09.138725872Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 243297,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T20:52:09.181663936Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/fd4a82214f100da275f3744fb74e80b192d9bac2b1eed8547b3d4f0b9763ea08/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fd4a82214f100da275f3744fb74e80b192d9bac2b1eed8547b3d4f0b9763ea08/hostname",
	        "HostsPath": "/var/lib/docker/containers/fd4a82214f100da275f3744fb74e80b192d9bac2b1eed8547b3d4f0b9763ea08/hosts",
	        "LogPath": "/var/lib/docker/containers/fd4a82214f100da275f3744fb74e80b192d9bac2b1eed8547b3d4f0b9763ea08/fd4a82214f100da275f3744fb74e80b192d9bac2b1eed8547b3d4f0b9763ea08-json.log",
	        "Name": "/no-preload-480337",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-480337:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-480337",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fd4a82214f100da275f3744fb74e80b192d9bac2b1eed8547b3d4f0b9763ea08",
	                "LowerDir": "/var/lib/docker/overlay2/2b21230f56d7e9b7727d29c6cb887e491b7269a6d221d969e78b28a54ec654ab-init/diff:/var/lib/docker/overlay2/b8e13cfd95c92c89e06ea4ca61f150e2b9e9586529048197192d1a83648ef8cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2b21230f56d7e9b7727d29c6cb887e491b7269a6d221d969e78b28a54ec654ab/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2b21230f56d7e9b7727d29c6cb887e491b7269a6d221d969e78b28a54ec654ab/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2b21230f56d7e9b7727d29c6cb887e491b7269a6d221d969e78b28a54ec654ab/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-480337",
	                "Source": "/var/lib/docker/volumes/no-preload-480337/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-480337",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-480337",
	                "name.minikube.sigs.k8s.io": "no-preload-480337",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f23e0fa410fb69f65978de4e27cccd21bd8042a98524e0d8acae086a9db819a7",
	            "SandboxKey": "/var/run/docker/netns/f23e0fa410fb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-480337": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5936e0858d4ec3cd6661a8389806e79259b6549629c90a4690cc3af923bfb781",
	                    "EndpointID": "68b642719950492c1a0342070fabeeacc89ef66b25ea28b44a90c7be10477470",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "82:0e:91:cc:a9:79",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-480337",
	                        "fd4a82214f10"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
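Editor's note (not part of the captured test output): most of the inspect dump above is only consulted for the Mounts and NetworkSettings.Ports fields. As a hedged illustration, the same information can usually be pulled with a Go template instead of reading the full JSON; the container name no-preload-480337 and the 8443/tcp mapping are taken from the snapshot above, and the expected value is only valid for that snapshot:

	# illustrative one-liner, assuming the container from the report above is still running
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' no-preload-480337
	# for the state captured above this would print 33067 (the host port forwarded to the API server)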
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480337 -n no-preload-480337
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-480337 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ delete  │ -p stopped-upgrade-058944                                                                                                                                                                                                                           │ stopped-upgrade-058944    │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p kubernetes-upgrade-902531 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-902531 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ delete  │ -p missing-upgrade-670521                                                                                                                                                                                                                           │ missing-upgrade-670521    │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p force-systemd-flag-431737 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-431737 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ delete  │ -p NoKubernetes-666907                                                                                                                                                                                                                              │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p NoKubernetes-666907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                         │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ stop    │ -p kubernetes-upgrade-902531                                                                                                                                                                                                                        │ kubernetes-upgrade-902531 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p kubernetes-upgrade-902531 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-902531 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │                     │
	│ ssh     │ -p NoKubernetes-666907 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │                     │
	│ stop    │ -p NoKubernetes-666907                                                                                                                                                                                                                              │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p NoKubernetes-666907 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ ssh     │ -p NoKubernetes-666907 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │                     │
	│ delete  │ -p NoKubernetes-666907                                                                                                                                                                                                                              │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p cert-options-636195 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-636195       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:52 UTC │
	│ ssh     │ force-systemd-flag-431737 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-431737 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ delete  │ -p force-systemd-flag-431737                                                                                                                                                                                                                        │ force-systemd-flag-431737 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p old-k8s-version-715005 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-715005    │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:52 UTC │
	│ ssh     │ cert-options-636195 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-636195       │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │ 20 Nov 25 20:52 UTC │
	│ ssh     │ -p cert-options-636195 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-636195       │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │ 20 Nov 25 20:52 UTC │
	│ delete  │ -p cert-options-636195                                                                                                                                                                                                                              │ cert-options-636195       │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │ 20 Nov 25 20:52 UTC │
	│ start   │ -p no-preload-480337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-480337         │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │ 20 Nov 25 20:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-715005 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-715005    │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │ 20 Nov 25 20:52 UTC │
	│ stop    │ -p old-k8s-version-715005 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-715005    │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │ 20 Nov 25 20:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-715005 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-715005    │ jenkins │ v1.37.0 │ 20 Nov 25 20:53 UTC │ 20 Nov 25 20:53 UTC │
	│ start   │ -p old-k8s-version-715005 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-715005    │ jenkins │ v1.37.0 │ 20 Nov 25 20:53 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:53:04
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:53:04.955240  250643 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:53:04.955518  250643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:53:04.955529  250643 out.go:374] Setting ErrFile to fd 2...
	I1120 20:53:04.955533  250643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:53:04.955780  250643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
	I1120 20:53:04.956246  250643 out.go:368] Setting JSON to false
	I1120 20:53:04.957350  250643 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2137,"bootTime":1763669848,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:53:04.957452  250643 start.go:143] virtualization: kvm guest
	I1120 20:53:04.959477  250643 out.go:179] * [old-k8s-version-715005] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:53:04.960815  250643 notify.go:221] Checking for updates...
	I1120 20:53:04.960842  250643 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:53:04.962251  250643 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:53:04.963527  250643 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3769/kubeconfig
	I1120 20:53:04.964758  250643 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3769/.minikube
	I1120 20:53:04.966123  250643 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:53:04.967475  250643 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:53:04.969092  250643 config.go:182] Loaded profile config "old-k8s-version-715005": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1120 20:53:04.970738  250643 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1120 20:53:04.971884  250643 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:53:04.995793  250643 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:53:04.995876  250643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:53:05.055233  250643 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-20 20:53:05.045912441 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:53:05.055333  250643 docker.go:319] overlay module found
	I1120 20:53:05.057086  250643 out.go:179] * Using the docker driver based on existing profile
	I1120 20:53:05.058134  250643 start.go:309] selected driver: docker
	I1120 20:53:05.058148  250643 start.go:930] validating driver "docker" against &{Name:old-k8s-version-715005 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-715005 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountStr
ing: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:53:05.058223  250643 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:53:05.058796  250643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:53:05.116178  250643 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-20 20:53:05.106203104 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:53:05.116584  250643 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:53:05.116624  250643 cni.go:84] Creating CNI manager for ""
	I1120 20:53:05.116685  250643 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 20:53:05.116729  250643 start.go:353] cluster config:
	{Name:old-k8s-version-715005 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-715005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:53:05.118553  250643 out.go:179] * Starting "old-k8s-version-715005" primary control-plane node in "old-k8s-version-715005" cluster
	I1120 20:53:05.119739  250643 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1120 20:53:05.120868  250643 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:53:05.121933  250643 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1120 20:53:05.121972  250643 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1120 20:53:05.121982  250643 cache.go:65] Caching tarball of preloaded images
	I1120 20:53:05.122029  250643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:53:05.122083  250643 preload.go:238] Found /home/jenkins/minikube-integration/21923-3769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1120 20:53:05.122098  250643 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1120 20:53:05.122221  250643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/old-k8s-version-715005/config.json ...
	I1120 20:53:05.143967  250643 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 20:53:05.143996  250643 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 20:53:05.144016  250643 cache.go:243] Successfully downloaded all kic artifacts
	I1120 20:53:05.144047  250643 start.go:360] acquireMachinesLock for old-k8s-version-715005: {Name:mk6d734c47b8a2456a3028a5fd9d7e5f37b6c200 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:53:05.144120  250643 start.go:364] duration metric: took 49.087µs to acquireMachinesLock for "old-k8s-version-715005"
	I1120 20:53:05.144143  250643 start.go:96] Skipping create...Using existing machine configuration
	I1120 20:53:05.144153  250643 fix.go:54] fixHost starting: 
	I1120 20:53:05.144485  250643 cli_runner.go:164] Run: docker container inspect old-k8s-version-715005 --format={{.State.Status}}
	I1120 20:53:05.162767  250643 fix.go:112] recreateIfNeeded on old-k8s-version-715005: state=Stopped err=<nil>
	W1120 20:53:05.162820  250643 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	09500288f4eef       56cc512116c8f       8 seconds ago       Running             busybox                   0                   cdd504cb3e5b7       busybox                                     default
	b4478575a63a0       52546a367cc9e       12 seconds ago      Running             coredns                   0                   c8b1d59946639       coredns-66bc5c9577-74j8f                    kube-system
	4df157c5b9a1b       6e38f40d628db       12 seconds ago      Running             storage-provisioner       0                   96844142fbdf1       storage-provisioner                         kube-system
	7f01e7c9c1262       409467f978b4a       23 seconds ago      Running             kindnet-cni               0                   263769be47688       kindnet-rs8fb                               kube-system
	b1294ef176345       fc25172553d79       25 seconds ago      Running             kube-proxy                0                   f73fc18d5c060       kube-proxy-hq4z4                            kube-system
	e708bbdcc7593       c3994bc696102       35 seconds ago      Running             kube-apiserver            0                   2bd1ce6c97035       kube-apiserver-no-preload-480337            kube-system
	eaf8540bacb9b       c80c8dbafe7dd       35 seconds ago      Running             kube-controller-manager   0                   cf1c9f691c128       kube-controller-manager-no-preload-480337   kube-system
	ca79851f9ec6e       5f1f5298c888d       35 seconds ago      Running             etcd                      0                   887e71b2b160a       etcd-no-preload-480337                      kube-system
	e8633b508491c       7dd6aaa1717ab       35 seconds ago      Running             kube-scheduler            0                   b7cd548c619dd       kube-scheduler-no-preload-480337            kube-system
	
	
	==> containerd <==
	Nov 20 20:52:53 no-preload-480337 containerd[656]: time="2025-11-20T20:52:53.735298710Z" level=info msg="connecting to shim 4df157c5b9a1bbabf656e7904180d7e8e7306ea3460d3757dad98c182ce4798e" address="unix:///run/containerd/s/8b763e24f95b2c74fa99bf1278596727e884234624574a02861ec071eb5979b3" protocol=ttrpc version=3
	Nov 20 20:52:53 no-preload-480337 containerd[656]: time="2025-11-20T20:52:53.736268324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-74j8f,Uid:a8f2defc-f970-4247-9872-a87af62a388d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8b1d59946639909ea4f832dea4520ffa17c1d75d1a02faa80612f7ff5f8042b\""
	Nov 20 20:52:53 no-preload-480337 containerd[656]: time="2025-11-20T20:52:53.740920218Z" level=info msg="CreateContainer within sandbox \"c8b1d59946639909ea4f832dea4520ffa17c1d75d1a02faa80612f7ff5f8042b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 20 20:52:53 no-preload-480337 containerd[656]: time="2025-11-20T20:52:53.748096784Z" level=info msg="Container b4478575a63a0e55e4993d841616f80762d5a9d3e43431e253bc2a24f2d9873a: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 20:52:53 no-preload-480337 containerd[656]: time="2025-11-20T20:52:53.753829415Z" level=info msg="CreateContainer within sandbox \"c8b1d59946639909ea4f832dea4520ffa17c1d75d1a02faa80612f7ff5f8042b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b4478575a63a0e55e4993d841616f80762d5a9d3e43431e253bc2a24f2d9873a\""
	Nov 20 20:52:53 no-preload-480337 containerd[656]: time="2025-11-20T20:52:53.754209745Z" level=info msg="StartContainer for \"b4478575a63a0e55e4993d841616f80762d5a9d3e43431e253bc2a24f2d9873a\""
	Nov 20 20:52:53 no-preload-480337 containerd[656]: time="2025-11-20T20:52:53.754980527Z" level=info msg="connecting to shim b4478575a63a0e55e4993d841616f80762d5a9d3e43431e253bc2a24f2d9873a" address="unix:///run/containerd/s/a9ca642480783df863d87e9f4123e535b0450d3a3d3bf669398431adb87b6dda" protocol=ttrpc version=3
	Nov 20 20:52:53 no-preload-480337 containerd[656]: time="2025-11-20T20:52:53.784454742Z" level=info msg="StartContainer for \"4df157c5b9a1bbabf656e7904180d7e8e7306ea3460d3757dad98c182ce4798e\" returns successfully"
	Nov 20 20:52:53 no-preload-480337 containerd[656]: time="2025-11-20T20:52:53.808317383Z" level=info msg="StartContainer for \"b4478575a63a0e55e4993d841616f80762d5a9d3e43431e253bc2a24f2d9873a\" returns successfully"
	Nov 20 20:52:56 no-preload-480337 containerd[656]: time="2025-11-20T20:52:56.584981353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:73993c68-9404-4e81-9899-9c821e232fe0,Namespace:default,Attempt:0,}"
	Nov 20 20:52:56 no-preload-480337 containerd[656]: time="2025-11-20T20:52:56.626011549Z" level=info msg="connecting to shim cdd504cb3e5b7b4c04fca3d6a966f6010f2fcc8f99937e43232b10cc9d904a4d" address="unix:///run/containerd/s/68274697c3f276ba72843671db39a989ec50dbad6eb7c2b03afbe913dba47a59" namespace=k8s.io protocol=ttrpc version=3
	Nov 20 20:52:56 no-preload-480337 containerd[656]: time="2025-11-20T20:52:56.694978424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:73993c68-9404-4e81-9899-9c821e232fe0,Namespace:default,Attempt:0,} returns sandbox id \"cdd504cb3e5b7b4c04fca3d6a966f6010f2fcc8f99937e43232b10cc9d904a4d\""
	Nov 20 20:52:56 no-preload-480337 containerd[656]: time="2025-11-20T20:52:56.696742434Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.114501996Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.115321242Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396644"
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.116596410Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.118585277Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.118984591Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 1.422206081s"
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.119021788Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.122630420Z" level=info msg="CreateContainer within sandbox \"cdd504cb3e5b7b4c04fca3d6a966f6010f2fcc8f99937e43232b10cc9d904a4d\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.130078782Z" level=info msg="Container 09500288f4eefbc8ec1bc6e0e541c0f16d066bc8b27ea156d977b0557be4c6a7: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.135851978Z" level=info msg="CreateContainer within sandbox \"cdd504cb3e5b7b4c04fca3d6a966f6010f2fcc8f99937e43232b10cc9d904a4d\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"09500288f4eefbc8ec1bc6e0e541c0f16d066bc8b27ea156d977b0557be4c6a7\""
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.136348402Z" level=info msg="StartContainer for \"09500288f4eefbc8ec1bc6e0e541c0f16d066bc8b27ea156d977b0557be4c6a7\""
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.137065900Z" level=info msg="connecting to shim 09500288f4eefbc8ec1bc6e0e541c0f16d066bc8b27ea156d977b0557be4c6a7" address="unix:///run/containerd/s/68274697c3f276ba72843671db39a989ec50dbad6eb7c2b03afbe913dba47a59" protocol=ttrpc version=3
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.186386008Z" level=info msg="StartContainer for \"09500288f4eefbc8ec1bc6e0e541c0f16d066bc8b27ea156d977b0557be4c6a7\" returns successfully"
	
	
	==> coredns [b4478575a63a0e55e4993d841616f80762d5a9d3e43431e253bc2a24f2d9873a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55708 - 32488 "HINFO IN 517935307456418888.262216089366449874. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.115997151s
	
	
	==> describe nodes <==
	Name:               no-preload-480337
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-480337
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=no-preload-480337
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_52_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:52:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-480337
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:53:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:53:05 +0000   Thu, 20 Nov 2025 20:52:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:53:05 +0000   Thu, 20 Nov 2025 20:52:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:53:05 +0000   Thu, 20 Nov 2025 20:52:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:53:05 +0000   Thu, 20 Nov 2025 20:52:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-480337
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                8c93adcf-5fa7-4496-af0d-3f9d0b8a36e2
	  Boot ID:                    7bcace10-faf8-4276-88b3-44b8d57bd915
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-74j8f                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-no-preload-480337                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-rs8fb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-480337             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-480337    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-hq4z4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-480337             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  31s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node no-preload-480337 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node no-preload-480337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node no-preload-480337 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node no-preload-480337 event: Registered Node no-preload-480337 in Controller
	  Normal  NodeReady                13s   kubelet          Node no-preload-480337 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov20 20:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001791] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.083011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400115] i8042: Warning: Keylock active
	[  +0.013837] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499559] block sda: the capability attribute has been deprecated.
	[  +0.087912] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024934] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.433429] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [ca79851f9ec6eba515f44a678a409b86f9ffaefc6ef521b75cbd706f58efe2d7] <==
	{"level":"warn","ts":"2025-11-20T20:52:31.887125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.895926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.902916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.909519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.915448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.921509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.927523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.933550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.940459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.950485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.956514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.962791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.968934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.975341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.981449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.988183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.995307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:32.001714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:32.009365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:32.016078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:32.022454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:32.039784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:32.045834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:32.052177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:32.099866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45090","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:53:06 up 35 min,  0 user,  load average: 2.82, 2.89, 1.91
	Linux no-preload-480337 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7f01e7c9c1262c8403572b71719012200db3087935ddbf656824e897fb0c81e8] <==
	I1120 20:52:42.731878       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 20:52:42.823948       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1120 20:52:42.824122       1 main.go:148] setting mtu 1500 for CNI 
	I1120 20:52:42.824143       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 20:52:42.824172       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T20:52:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 20:52:43.026673       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 20:52:43.026723       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 20:52:43.026735       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 20:52:43.026884       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 20:52:43.328272       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 20:52:43.328299       1 metrics.go:72] Registering metrics
	I1120 20:52:43.328383       1 controller.go:711] "Syncing nftables rules"
	I1120 20:52:53.028518       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 20:52:53.028581       1 main.go:301] handling current node
	I1120 20:53:03.027871       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 20:53:03.027940       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e708bbdcc75932b9a84fb152833406ce1f6a3925f9662f51b2dc05ee52ac3725] <==
	E1120 20:52:32.638176       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1120 20:52:32.666706       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 20:52:32.668907       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1120 20:52:32.668977       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:52:32.672490       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:52:32.672890       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 20:52:32.841768       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 20:52:33.468830       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 20:52:33.472194       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 20:52:33.472212       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 20:52:33.877332       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 20:52:33.909700       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 20:52:33.974426       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 20:52:33.980142       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1120 20:52:33.981200       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 20:52:33.984897       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 20:52:34.496119       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 20:52:35.215916       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 20:52:35.223529       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 20:52:35.230171       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 20:52:39.747536       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 20:52:40.251971       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1120 20:52:40.551469       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:52:40.556329       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1120 20:53:05.400833       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:36130: use of closed network connection
	
	
	==> kube-controller-manager [eaf8540bacb9b6e74a3a5c6e34efd399f0587139bbb76e4aab8ec5549d16affc] <==
	I1120 20:52:39.460545       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:52:39.460563       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 20:52:39.460585       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 20:52:39.495052       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 20:52:39.495118       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1120 20:52:39.495064       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 20:52:39.495185       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 20:52:39.495249       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 20:52:39.495284       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 20:52:39.495653       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 20:52:39.495687       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 20:52:39.495694       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 20:52:39.495889       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 20:52:39.495990       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 20:52:39.496157       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 20:52:39.496357       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 20:52:39.496493       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 20:52:39.500716       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:52:39.500725       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 20:52:39.503008       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:52:39.506185       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 20:52:39.512427       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 20:52:39.516732       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:52:39.517781       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 20:52:54.446988       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b1294ef176345ab99bdd9feed8402827c8d5f233d25dc8b19ada146c4b73e67a] <==
	I1120 20:52:40.957887       1 server_linux.go:53] "Using iptables proxy"
	I1120 20:52:41.029140       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 20:52:41.129350       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 20:52:41.129412       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1120 20:52:41.129498       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 20:52:41.153450       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 20:52:41.153515       1 server_linux.go:132] "Using iptables Proxier"
	I1120 20:52:41.159182       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 20:52:41.159617       1 server.go:527] "Version info" version="v1.34.1"
	I1120 20:52:41.159652       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:52:41.162726       1 config.go:200] "Starting service config controller"
	I1120 20:52:41.163051       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 20:52:41.162763       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 20:52:41.163090       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 20:52:41.162779       1 config.go:106] "Starting endpoint slice config controller"
	I1120 20:52:41.163105       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 20:52:41.162866       1 config.go:309] "Starting node config controller"
	I1120 20:52:41.163123       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 20:52:41.163129       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 20:52:41.263240       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 20:52:41.263274       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 20:52:41.263277       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e8633b508491c1ab67646d920e8865cc1ac787413de1f54f443bdaabb4fc7109] <==
	E1120 20:52:32.525409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:52:32.525432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 20:52:32.525502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:52:32.526396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 20:52:32.526632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:52:32.526595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:52:32.526518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 20:52:32.526658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:52:32.526689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 20:52:32.526855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 20:52:32.526938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 20:52:32.526985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 20:52:32.527034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 20:52:32.527158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 20:52:32.527203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:52:33.392701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:52:33.395980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1120 20:52:33.433580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:52:33.443657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:52:33.524455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 20:52:33.554587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 20:52:33.566617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:52:33.628877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 20:52:33.704970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1120 20:52:35.522172       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 20:52:36 no-preload-480337 kubelet[2186]: I1120 20:52:36.099223    2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-480337" podStartSLOduration=1.0992013 podStartE2EDuration="1.0992013s" podCreationTimestamp="2025-11-20 20:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:36.09900424 +0000 UTC m=+1.138303166" watchObservedRunningTime="2025-11-20 20:52:36.0992013 +0000 UTC m=+1.138500227"
	Nov 20 20:52:36 no-preload-480337 kubelet[2186]: I1120 20:52:36.118809    2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-480337" podStartSLOduration=1.118791258 podStartE2EDuration="1.118791258s" podCreationTimestamp="2025-11-20 20:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:36.108353204 +0000 UTC m=+1.147652130" watchObservedRunningTime="2025-11-20 20:52:36.118791258 +0000 UTC m=+1.158090185"
	Nov 20 20:52:36 no-preload-480337 kubelet[2186]: I1120 20:52:36.130539    2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-480337" podStartSLOduration=2.130517344 podStartE2EDuration="2.130517344s" podCreationTimestamp="2025-11-20 20:52:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:36.118952783 +0000 UTC m=+1.158251715" watchObservedRunningTime="2025-11-20 20:52:36.130517344 +0000 UTC m=+1.169816271"
	Nov 20 20:52:36 no-preload-480337 kubelet[2186]: I1120 20:52:36.143423    2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-480337" podStartSLOduration=1.14335727 podStartE2EDuration="1.14335727s" podCreationTimestamp="2025-11-20 20:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:36.130984248 +0000 UTC m=+1.170283175" watchObservedRunningTime="2025-11-20 20:52:36.14335727 +0000 UTC m=+1.182656197"
	Nov 20 20:52:39 no-preload-480337 kubelet[2186]: I1120 20:52:39.485345    2186 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 20:52:39 no-preload-480337 kubelet[2186]: I1120 20:52:39.486055    2186 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 20:52:40 no-preload-480337 kubelet[2186]: I1120 20:52:40.360653    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da75f7f4-afa3-4976-98d9-0574067ce59e-lib-modules\") pod \"kube-proxy-hq4z4\" (UID: \"da75f7f4-afa3-4976-98d9-0574067ce59e\") " pod="kube-system/kube-proxy-hq4z4"
	Nov 20 20:52:40 no-preload-480337 kubelet[2186]: I1120 20:52:40.360712    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/53878683-6bf6-4530-ae0c-4d20a240db94-cni-cfg\") pod \"kindnet-rs8fb\" (UID: \"53878683-6bf6-4530-ae0c-4d20a240db94\") " pod="kube-system/kindnet-rs8fb"
	Nov 20 20:52:40 no-preload-480337 kubelet[2186]: I1120 20:52:40.360740    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53878683-6bf6-4530-ae0c-4d20a240db94-xtables-lock\") pod \"kindnet-rs8fb\" (UID: \"53878683-6bf6-4530-ae0c-4d20a240db94\") " pod="kube-system/kindnet-rs8fb"
	Nov 20 20:52:40 no-preload-480337 kubelet[2186]: I1120 20:52:40.360767    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53878683-6bf6-4530-ae0c-4d20a240db94-lib-modules\") pod \"kindnet-rs8fb\" (UID: \"53878683-6bf6-4530-ae0c-4d20a240db94\") " pod="kube-system/kindnet-rs8fb"
	Nov 20 20:52:40 no-preload-480337 kubelet[2186]: I1120 20:52:40.360793    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfr5f\" (UniqueName: \"kubernetes.io/projected/53878683-6bf6-4530-ae0c-4d20a240db94-kube-api-access-jfr5f\") pod \"kindnet-rs8fb\" (UID: \"53878683-6bf6-4530-ae0c-4d20a240db94\") " pod="kube-system/kindnet-rs8fb"
	Nov 20 20:52:40 no-preload-480337 kubelet[2186]: I1120 20:52:40.360821    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hntx\" (UniqueName: \"kubernetes.io/projected/da75f7f4-afa3-4976-98d9-0574067ce59e-kube-api-access-6hntx\") pod \"kube-proxy-hq4z4\" (UID: \"da75f7f4-afa3-4976-98d9-0574067ce59e\") " pod="kube-system/kube-proxy-hq4z4"
	Nov 20 20:52:40 no-preload-480337 kubelet[2186]: I1120 20:52:40.360844    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/da75f7f4-afa3-4976-98d9-0574067ce59e-kube-proxy\") pod \"kube-proxy-hq4z4\" (UID: \"da75f7f4-afa3-4976-98d9-0574067ce59e\") " pod="kube-system/kube-proxy-hq4z4"
	Nov 20 20:52:40 no-preload-480337 kubelet[2186]: I1120 20:52:40.360867    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da75f7f4-afa3-4976-98d9-0574067ce59e-xtables-lock\") pod \"kube-proxy-hq4z4\" (UID: \"da75f7f4-afa3-4976-98d9-0574067ce59e\") " pod="kube-system/kube-proxy-hq4z4"
	Nov 20 20:52:43 no-preload-480337 kubelet[2186]: I1120 20:52:43.102760    2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hq4z4" podStartSLOduration=3.102738361 podStartE2EDuration="3.102738361s" podCreationTimestamp="2025-11-20 20:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:41.096718694 +0000 UTC m=+6.136017620" watchObservedRunningTime="2025-11-20 20:52:43.102738361 +0000 UTC m=+8.142037289"
	Nov 20 20:52:48 no-preload-480337 kubelet[2186]: I1120 20:52:48.532944    2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rs8fb" podStartSLOduration=7.104820189 podStartE2EDuration="8.532925934s" podCreationTimestamp="2025-11-20 20:52:40 +0000 UTC" firstStartedPulling="2025-11-20 20:52:41.046946138 +0000 UTC m=+6.086245051" lastFinishedPulling="2025-11-20 20:52:42.475051876 +0000 UTC m=+7.514350796" observedRunningTime="2025-11-20 20:52:43.102919502 +0000 UTC m=+8.142218424" watchObservedRunningTime="2025-11-20 20:52:48.532925934 +0000 UTC m=+13.572224861"
	Nov 20 20:52:53 no-preload-480337 kubelet[2186]: I1120 20:52:53.277594    2186 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 20 20:52:53 no-preload-480337 kubelet[2186]: I1120 20:52:53.355149    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c1e8b5fa-dff6-41a8-afd9-c996cfda92a4-tmp\") pod \"storage-provisioner\" (UID: \"c1e8b5fa-dff6-41a8-afd9-c996cfda92a4\") " pod="kube-system/storage-provisioner"
	Nov 20 20:52:53 no-preload-480337 kubelet[2186]: I1120 20:52:53.355194    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6qjj\" (UniqueName: \"kubernetes.io/projected/c1e8b5fa-dff6-41a8-afd9-c996cfda92a4-kube-api-access-l6qjj\") pod \"storage-provisioner\" (UID: \"c1e8b5fa-dff6-41a8-afd9-c996cfda92a4\") " pod="kube-system/storage-provisioner"
	Nov 20 20:52:53 no-preload-480337 kubelet[2186]: I1120 20:52:53.355211    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bkvh\" (UniqueName: \"kubernetes.io/projected/a8f2defc-f970-4247-9872-a87af62a388d-kube-api-access-7bkvh\") pod \"coredns-66bc5c9577-74j8f\" (UID: \"a8f2defc-f970-4247-9872-a87af62a388d\") " pod="kube-system/coredns-66bc5c9577-74j8f"
	Nov 20 20:52:53 no-preload-480337 kubelet[2186]: I1120 20:52:53.355227    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8f2defc-f970-4247-9872-a87af62a388d-config-volume\") pod \"coredns-66bc5c9577-74j8f\" (UID: \"a8f2defc-f970-4247-9872-a87af62a388d\") " pod="kube-system/coredns-66bc5c9577-74j8f"
	Nov 20 20:52:54 no-preload-480337 kubelet[2186]: I1120 20:52:54.139887    2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-74j8f" podStartSLOduration=14.139827486 podStartE2EDuration="14.139827486s" podCreationTimestamp="2025-11-20 20:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:54.12743428 +0000 UTC m=+19.166733207" watchObservedRunningTime="2025-11-20 20:52:54.139827486 +0000 UTC m=+19.179126410"
	Nov 20 20:52:54 no-preload-480337 kubelet[2186]: I1120 20:52:54.151000    2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.150985801 podStartE2EDuration="14.150985801s" podCreationTimestamp="2025-11-20 20:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:54.150510637 +0000 UTC m=+19.189809564" watchObservedRunningTime="2025-11-20 20:52:54.150985801 +0000 UTC m=+19.190284726"
	Nov 20 20:52:56 no-preload-480337 kubelet[2186]: I1120 20:52:56.370874    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p42zk\" (UniqueName: \"kubernetes.io/projected/73993c68-9404-4e81-9899-9c821e232fe0-kube-api-access-p42zk\") pod \"busybox\" (UID: \"73993c68-9404-4e81-9899-9c821e232fe0\") " pod="default/busybox"
	Nov 20 20:52:59 no-preload-480337 kubelet[2186]: I1120 20:52:59.138726    2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.715301712 podStartE2EDuration="3.138697429s" podCreationTimestamp="2025-11-20 20:52:56 +0000 UTC" firstStartedPulling="2025-11-20 20:52:56.696395638 +0000 UTC m=+21.735694549" lastFinishedPulling="2025-11-20 20:52:58.11979136 +0000 UTC m=+23.159090266" observedRunningTime="2025-11-20 20:52:59.138546404 +0000 UTC m=+24.177845334" watchObservedRunningTime="2025-11-20 20:52:59.138697429 +0000 UTC m=+24.177996355"
	
	
	==> storage-provisioner [4df157c5b9a1bbabf656e7904180d7e8e7306ea3460d3757dad98c182ce4798e] <==
	I1120 20:52:53.793797       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 20:52:53.801186       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 20:52:53.801244       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 20:52:53.803463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:52:53.808939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 20:52:53.809251       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 20:52:53.809361       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cb50d925-0e04-4d42-8ce6-2e966809ec0a", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-480337_0ce61194-19e1-4c1f-9032-28d707dcb80c became leader
	I1120 20:52:53.809475       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-480337_0ce61194-19e1-4c1f-9032-28d707dcb80c!
	W1120 20:52:53.811890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:52:53.815004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 20:52:53.910554       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-480337_0ce61194-19e1-4c1f-9032-28d707dcb80c!
	W1120 20:52:55.818464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:52:55.823164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:52:57.826649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:52:57.831142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:52:59.833907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:52:59.838818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:53:01.842224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:53:01.846185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:53:03.849548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:53:03.853515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:53:05.855809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:53:05.860241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-480337 -n no-preload-480337
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-480337 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-480337
helpers_test.go:243: (dbg) docker inspect no-preload-480337:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fd4a82214f100da275f3744fb74e80b192d9bac2b1eed8547b3d4f0b9763ea08",
	        "Created": "2025-11-20T20:52:09.138725872Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 243297,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T20:52:09.181663936Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/fd4a82214f100da275f3744fb74e80b192d9bac2b1eed8547b3d4f0b9763ea08/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fd4a82214f100da275f3744fb74e80b192d9bac2b1eed8547b3d4f0b9763ea08/hostname",
	        "HostsPath": "/var/lib/docker/containers/fd4a82214f100da275f3744fb74e80b192d9bac2b1eed8547b3d4f0b9763ea08/hosts",
	        "LogPath": "/var/lib/docker/containers/fd4a82214f100da275f3744fb74e80b192d9bac2b1eed8547b3d4f0b9763ea08/fd4a82214f100da275f3744fb74e80b192d9bac2b1eed8547b3d4f0b9763ea08-json.log",
	        "Name": "/no-preload-480337",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-480337:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-480337",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fd4a82214f100da275f3744fb74e80b192d9bac2b1eed8547b3d4f0b9763ea08",
	                "LowerDir": "/var/lib/docker/overlay2/2b21230f56d7e9b7727d29c6cb887e491b7269a6d221d969e78b28a54ec654ab-init/diff:/var/lib/docker/overlay2/b8e13cfd95c92c89e06ea4ca61f150e2b9e9586529048197192d1a83648ef8cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2b21230f56d7e9b7727d29c6cb887e491b7269a6d221d969e78b28a54ec654ab/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2b21230f56d7e9b7727d29c6cb887e491b7269a6d221d969e78b28a54ec654ab/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2b21230f56d7e9b7727d29c6cb887e491b7269a6d221d969e78b28a54ec654ab/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-480337",
	                "Source": "/var/lib/docker/volumes/no-preload-480337/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-480337",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-480337",
	                "name.minikube.sigs.k8s.io": "no-preload-480337",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f23e0fa410fb69f65978de4e27cccd21bd8042a98524e0d8acae086a9db819a7",
	            "SandboxKey": "/var/run/docker/netns/f23e0fa410fb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-480337": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5936e0858d4ec3cd6661a8389806e79259b6549629c90a4690cc3af923bfb781",
	                    "EndpointID": "68b642719950492c1a0342070fabeeacc89ef66b25ea28b44a90c7be10477470",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "82:0e:91:cc:a9:79",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-480337",
	                        "fd4a82214f10"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480337 -n no-preload-480337
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-480337 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p stopped-upgrade-058944                                                                                                                                                                                                                           │ stopped-upgrade-058944    │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p kubernetes-upgrade-902531 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-902531 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ delete  │ -p missing-upgrade-670521                                                                                                                                                                                                                           │ missing-upgrade-670521    │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p force-systemd-flag-431737 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-431737 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ delete  │ -p NoKubernetes-666907                                                                                                                                                                                                                              │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p NoKubernetes-666907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                         │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ stop    │ -p kubernetes-upgrade-902531                                                                                                                                                                                                                        │ kubernetes-upgrade-902531 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p kubernetes-upgrade-902531 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-902531 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │                     │
	│ ssh     │ -p NoKubernetes-666907 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │                     │
	│ stop    │ -p NoKubernetes-666907                                                                                                                                                                                                                              │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p NoKubernetes-666907 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ ssh     │ -p NoKubernetes-666907 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │                     │
	│ delete  │ -p NoKubernetes-666907                                                                                                                                                                                                                              │ NoKubernetes-666907       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p cert-options-636195 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-636195       │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:52 UTC │
	│ ssh     │ force-systemd-flag-431737 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-431737 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ delete  │ -p force-systemd-flag-431737                                                                                                                                                                                                                        │ force-systemd-flag-431737 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ start   │ -p old-k8s-version-715005 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-715005    │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:52 UTC │
	│ ssh     │ cert-options-636195 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-636195       │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │ 20 Nov 25 20:52 UTC │
	│ ssh     │ -p cert-options-636195 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-636195       │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │ 20 Nov 25 20:52 UTC │
	│ delete  │ -p cert-options-636195                                                                                                                                                                                                                              │ cert-options-636195       │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │ 20 Nov 25 20:52 UTC │
	│ start   │ -p no-preload-480337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-480337         │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │ 20 Nov 25 20:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-715005 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-715005    │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │ 20 Nov 25 20:52 UTC │
	│ stop    │ -p old-k8s-version-715005 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-715005    │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │ 20 Nov 25 20:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-715005 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-715005    │ jenkins │ v1.37.0 │ 20 Nov 25 20:53 UTC │ 20 Nov 25 20:53 UTC │
	│ start   │ -p old-k8s-version-715005 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-715005    │ jenkins │ v1.37.0 │ 20 Nov 25 20:53 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:53:04
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:53:04.955240  250643 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:53:04.955518  250643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:53:04.955529  250643 out.go:374] Setting ErrFile to fd 2...
	I1120 20:53:04.955533  250643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:53:04.955780  250643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
	I1120 20:53:04.956246  250643 out.go:368] Setting JSON to false
	I1120 20:53:04.957350  250643 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2137,"bootTime":1763669848,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:53:04.957452  250643 start.go:143] virtualization: kvm guest
	I1120 20:53:04.959477  250643 out.go:179] * [old-k8s-version-715005] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:53:04.960815  250643 notify.go:221] Checking for updates...
	I1120 20:53:04.960842  250643 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:53:04.962251  250643 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:53:04.963527  250643 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3769/kubeconfig
	I1120 20:53:04.964758  250643 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3769/.minikube
	I1120 20:53:04.966123  250643 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:53:04.967475  250643 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:53:04.969092  250643 config.go:182] Loaded profile config "old-k8s-version-715005": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1120 20:53:04.970738  250643 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1120 20:53:04.971884  250643 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:53:04.995793  250643 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:53:04.995876  250643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:53:05.055233  250643 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-20 20:53:05.045912441 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:53:05.055333  250643 docker.go:319] overlay module found
	I1120 20:53:05.057086  250643 out.go:179] * Using the docker driver based on existing profile
	I1120 20:53:05.058134  250643 start.go:309] selected driver: docker
	I1120 20:53:05.058148  250643 start.go:930] validating driver "docker" against &{Name:old-k8s-version-715005 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-715005 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountStr
ing: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:53:05.058223  250643 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:53:05.058796  250643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:53:05.116178  250643 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-20 20:53:05.106203104 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:53:05.116584  250643 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:53:05.116624  250643 cni.go:84] Creating CNI manager for ""
	I1120 20:53:05.116685  250643 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 20:53:05.116729  250643 start.go:353] cluster config:
	{Name:old-k8s-version-715005 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-715005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:53:05.118553  250643 out.go:179] * Starting "old-k8s-version-715005" primary control-plane node in "old-k8s-version-715005" cluster
	I1120 20:53:05.119739  250643 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1120 20:53:05.120868  250643 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:53:05.121933  250643 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1120 20:53:05.121972  250643 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1120 20:53:05.121982  250643 cache.go:65] Caching tarball of preloaded images
	I1120 20:53:05.122029  250643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:53:05.122083  250643 preload.go:238] Found /home/jenkins/minikube-integration/21923-3769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1120 20:53:05.122098  250643 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1120 20:53:05.122221  250643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/old-k8s-version-715005/config.json ...
	I1120 20:53:05.143967  250643 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 20:53:05.143996  250643 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 20:53:05.144016  250643 cache.go:243] Successfully downloaded all kic artifacts
	I1120 20:53:05.144047  250643 start.go:360] acquireMachinesLock for old-k8s-version-715005: {Name:mk6d734c47b8a2456a3028a5fd9d7e5f37b6c200 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:53:05.144120  250643 start.go:364] duration metric: took 49.087µs to acquireMachinesLock for "old-k8s-version-715005"
	I1120 20:53:05.144143  250643 start.go:96] Skipping create...Using existing machine configuration
	I1120 20:53:05.144153  250643 fix.go:54] fixHost starting: 
	I1120 20:53:05.144485  250643 cli_runner.go:164] Run: docker container inspect old-k8s-version-715005 --format={{.State.Status}}
	I1120 20:53:05.162767  250643 fix.go:112] recreateIfNeeded on old-k8s-version-715005: state=Stopped err=<nil>
	W1120 20:53:05.162820  250643 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	09500288f4eef       56cc512116c8f       9 seconds ago       Running             busybox                   0                   cdd504cb3e5b7       busybox                                     default
	b4478575a63a0       52546a367cc9e       14 seconds ago      Running             coredns                   0                   c8b1d59946639       coredns-66bc5c9577-74j8f                    kube-system
	4df157c5b9a1b       6e38f40d628db       14 seconds ago      Running             storage-provisioner       0                   96844142fbdf1       storage-provisioner                         kube-system
	7f01e7c9c1262       409467f978b4a       25 seconds ago      Running             kindnet-cni               0                   263769be47688       kindnet-rs8fb                               kube-system
	b1294ef176345       fc25172553d79       27 seconds ago      Running             kube-proxy                0                   f73fc18d5c060       kube-proxy-hq4z4                            kube-system
	e708bbdcc7593       c3994bc696102       37 seconds ago      Running             kube-apiserver            0                   2bd1ce6c97035       kube-apiserver-no-preload-480337            kube-system
	eaf8540bacb9b       c80c8dbafe7dd       37 seconds ago      Running             kube-controller-manager   0                   cf1c9f691c128       kube-controller-manager-no-preload-480337   kube-system
	ca79851f9ec6e       5f1f5298c888d       37 seconds ago      Running             etcd                      0                   887e71b2b160a       etcd-no-preload-480337                      kube-system
	e8633b508491c       7dd6aaa1717ab       37 seconds ago      Running             kube-scheduler            0                   b7cd548c619dd       kube-scheduler-no-preload-480337            kube-system
	
	
	==> containerd <==
	Nov 20 20:52:53 no-preload-480337 containerd[656]: time="2025-11-20T20:52:53.735298710Z" level=info msg="connecting to shim 4df157c5b9a1bbabf656e7904180d7e8e7306ea3460d3757dad98c182ce4798e" address="unix:///run/containerd/s/8b763e24f95b2c74fa99bf1278596727e884234624574a02861ec071eb5979b3" protocol=ttrpc version=3
	Nov 20 20:52:53 no-preload-480337 containerd[656]: time="2025-11-20T20:52:53.736268324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-74j8f,Uid:a8f2defc-f970-4247-9872-a87af62a388d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8b1d59946639909ea4f832dea4520ffa17c1d75d1a02faa80612f7ff5f8042b\""
	Nov 20 20:52:53 no-preload-480337 containerd[656]: time="2025-11-20T20:52:53.740920218Z" level=info msg="CreateContainer within sandbox \"c8b1d59946639909ea4f832dea4520ffa17c1d75d1a02faa80612f7ff5f8042b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 20 20:52:53 no-preload-480337 containerd[656]: time="2025-11-20T20:52:53.748096784Z" level=info msg="Container b4478575a63a0e55e4993d841616f80762d5a9d3e43431e253bc2a24f2d9873a: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 20:52:53 no-preload-480337 containerd[656]: time="2025-11-20T20:52:53.753829415Z" level=info msg="CreateContainer within sandbox \"c8b1d59946639909ea4f832dea4520ffa17c1d75d1a02faa80612f7ff5f8042b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b4478575a63a0e55e4993d841616f80762d5a9d3e43431e253bc2a24f2d9873a\""
	Nov 20 20:52:53 no-preload-480337 containerd[656]: time="2025-11-20T20:52:53.754209745Z" level=info msg="StartContainer for \"b4478575a63a0e55e4993d841616f80762d5a9d3e43431e253bc2a24f2d9873a\""
	Nov 20 20:52:53 no-preload-480337 containerd[656]: time="2025-11-20T20:52:53.754980527Z" level=info msg="connecting to shim b4478575a63a0e55e4993d841616f80762d5a9d3e43431e253bc2a24f2d9873a" address="unix:///run/containerd/s/a9ca642480783df863d87e9f4123e535b0450d3a3d3bf669398431adb87b6dda" protocol=ttrpc version=3
	Nov 20 20:52:53 no-preload-480337 containerd[656]: time="2025-11-20T20:52:53.784454742Z" level=info msg="StartContainer for \"4df157c5b9a1bbabf656e7904180d7e8e7306ea3460d3757dad98c182ce4798e\" returns successfully"
	Nov 20 20:52:53 no-preload-480337 containerd[656]: time="2025-11-20T20:52:53.808317383Z" level=info msg="StartContainer for \"b4478575a63a0e55e4993d841616f80762d5a9d3e43431e253bc2a24f2d9873a\" returns successfully"
	Nov 20 20:52:56 no-preload-480337 containerd[656]: time="2025-11-20T20:52:56.584981353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:73993c68-9404-4e81-9899-9c821e232fe0,Namespace:default,Attempt:0,}"
	Nov 20 20:52:56 no-preload-480337 containerd[656]: time="2025-11-20T20:52:56.626011549Z" level=info msg="connecting to shim cdd504cb3e5b7b4c04fca3d6a966f6010f2fcc8f99937e43232b10cc9d904a4d" address="unix:///run/containerd/s/68274697c3f276ba72843671db39a989ec50dbad6eb7c2b03afbe913dba47a59" namespace=k8s.io protocol=ttrpc version=3
	Nov 20 20:52:56 no-preload-480337 containerd[656]: time="2025-11-20T20:52:56.694978424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:73993c68-9404-4e81-9899-9c821e232fe0,Namespace:default,Attempt:0,} returns sandbox id \"cdd504cb3e5b7b4c04fca3d6a966f6010f2fcc8f99937e43232b10cc9d904a4d\""
	Nov 20 20:52:56 no-preload-480337 containerd[656]: time="2025-11-20T20:52:56.696742434Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.114501996Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.115321242Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396644"
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.116596410Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.118585277Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.118984591Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 1.422206081s"
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.119021788Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.122630420Z" level=info msg="CreateContainer within sandbox \"cdd504cb3e5b7b4c04fca3d6a966f6010f2fcc8f99937e43232b10cc9d904a4d\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.130078782Z" level=info msg="Container 09500288f4eefbc8ec1bc6e0e541c0f16d066bc8b27ea156d977b0557be4c6a7: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.135851978Z" level=info msg="CreateContainer within sandbox \"cdd504cb3e5b7b4c04fca3d6a966f6010f2fcc8f99937e43232b10cc9d904a4d\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"09500288f4eefbc8ec1bc6e0e541c0f16d066bc8b27ea156d977b0557be4c6a7\""
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.136348402Z" level=info msg="StartContainer for \"09500288f4eefbc8ec1bc6e0e541c0f16d066bc8b27ea156d977b0557be4c6a7\""
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.137065900Z" level=info msg="connecting to shim 09500288f4eefbc8ec1bc6e0e541c0f16d066bc8b27ea156d977b0557be4c6a7" address="unix:///run/containerd/s/68274697c3f276ba72843671db39a989ec50dbad6eb7c2b03afbe913dba47a59" protocol=ttrpc version=3
	Nov 20 20:52:58 no-preload-480337 containerd[656]: time="2025-11-20T20:52:58.186386008Z" level=info msg="StartContainer for \"09500288f4eefbc8ec1bc6e0e541c0f16d066bc8b27ea156d977b0557be4c6a7\" returns successfully"
	
	
	==> coredns [b4478575a63a0e55e4993d841616f80762d5a9d3e43431e253bc2a24f2d9873a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55708 - 32488 "HINFO IN 517935307456418888.262216089366449874. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.115997151s
	
	
	==> describe nodes <==
	Name:               no-preload-480337
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-480337
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=no-preload-480337
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_52_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:52:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-480337
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:53:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:53:05 +0000   Thu, 20 Nov 2025 20:52:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:53:05 +0000   Thu, 20 Nov 2025 20:52:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:53:05 +0000   Thu, 20 Nov 2025 20:52:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:53:05 +0000   Thu, 20 Nov 2025 20:52:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-480337
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                8c93adcf-5fa7-4496-af0d-3f9d0b8a36e2
	  Boot ID:                    7bcace10-faf8-4276-88b3-44b8d57bd915
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-74j8f                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-no-preload-480337                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-rs8fb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-480337             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-480337    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-hq4z4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-480337             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  33s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node no-preload-480337 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node no-preload-480337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node no-preload-480337 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node no-preload-480337 event: Registered Node no-preload-480337 in Controller
	  Normal  NodeReady                15s   kubelet          Node no-preload-480337 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov20 20:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001791] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.083011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400115] i8042: Warning: Keylock active
	[  +0.013837] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499559] block sda: the capability attribute has been deprecated.
	[  +0.087912] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024934] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.433429] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [ca79851f9ec6eba515f44a678a409b86f9ffaefc6ef521b75cbd706f58efe2d7] <==
	{"level":"warn","ts":"2025-11-20T20:52:31.887125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.895926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.902916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.909519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.915448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.921509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.927523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.933550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.940459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.950485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.956514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.962791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.968934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.975341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.981449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.988183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:31.995307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:32.001714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:32.009365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:32.016078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:32.022454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:32.039784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:32.045834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:32.052177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:52:32.099866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45090","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:53:08 up 35 min,  0 user,  load average: 2.82, 2.89, 1.91
	Linux no-preload-480337 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7f01e7c9c1262c8403572b71719012200db3087935ddbf656824e897fb0c81e8] <==
	I1120 20:52:42.731878       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 20:52:42.823948       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1120 20:52:42.824122       1 main.go:148] setting mtu 1500 for CNI 
	I1120 20:52:42.824143       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 20:52:42.824172       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T20:52:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 20:52:43.026673       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 20:52:43.026723       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 20:52:43.026735       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 20:52:43.026884       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 20:52:43.328272       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 20:52:43.328299       1 metrics.go:72] Registering metrics
	I1120 20:52:43.328383       1 controller.go:711] "Syncing nftables rules"
	I1120 20:52:53.028518       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 20:52:53.028581       1 main.go:301] handling current node
	I1120 20:53:03.027871       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 20:53:03.027940       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e708bbdcc75932b9a84fb152833406ce1f6a3925f9662f51b2dc05ee52ac3725] <==
	E1120 20:52:32.638176       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1120 20:52:32.666706       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 20:52:32.668907       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1120 20:52:32.668977       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:52:32.672490       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:52:32.672890       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 20:52:32.841768       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 20:52:33.468830       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 20:52:33.472194       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 20:52:33.472212       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 20:52:33.877332       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 20:52:33.909700       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 20:52:33.974426       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 20:52:33.980142       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1120 20:52:33.981200       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 20:52:33.984897       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 20:52:34.496119       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 20:52:35.215916       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 20:52:35.223529       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 20:52:35.230171       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 20:52:39.747536       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 20:52:40.251971       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1120 20:52:40.551469       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:52:40.556329       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1120 20:53:05.400833       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:36130: use of closed network connection
	
	
	==> kube-controller-manager [eaf8540bacb9b6e74a3a5c6e34efd399f0587139bbb76e4aab8ec5549d16affc] <==
	I1120 20:52:39.460545       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:52:39.460563       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 20:52:39.460585       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 20:52:39.495052       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 20:52:39.495118       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1120 20:52:39.495064       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 20:52:39.495185       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 20:52:39.495249       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 20:52:39.495284       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 20:52:39.495653       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 20:52:39.495687       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 20:52:39.495694       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 20:52:39.495889       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 20:52:39.495990       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 20:52:39.496157       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 20:52:39.496357       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 20:52:39.496493       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 20:52:39.500716       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:52:39.500725       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 20:52:39.503008       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:52:39.506185       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 20:52:39.512427       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 20:52:39.516732       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:52:39.517781       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 20:52:54.446988       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b1294ef176345ab99bdd9feed8402827c8d5f233d25dc8b19ada146c4b73e67a] <==
	I1120 20:52:40.957887       1 server_linux.go:53] "Using iptables proxy"
	I1120 20:52:41.029140       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 20:52:41.129350       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 20:52:41.129412       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1120 20:52:41.129498       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 20:52:41.153450       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 20:52:41.153515       1 server_linux.go:132] "Using iptables Proxier"
	I1120 20:52:41.159182       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 20:52:41.159617       1 server.go:527] "Version info" version="v1.34.1"
	I1120 20:52:41.159652       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:52:41.162726       1 config.go:200] "Starting service config controller"
	I1120 20:52:41.163051       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 20:52:41.162763       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 20:52:41.163090       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 20:52:41.162779       1 config.go:106] "Starting endpoint slice config controller"
	I1120 20:52:41.163105       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 20:52:41.162866       1 config.go:309] "Starting node config controller"
	I1120 20:52:41.163123       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 20:52:41.163129       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 20:52:41.263240       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 20:52:41.263274       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 20:52:41.263277       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e8633b508491c1ab67646d920e8865cc1ac787413de1f54f443bdaabb4fc7109] <==
	E1120 20:52:32.525409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:52:32.525432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 20:52:32.525502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:52:32.526396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 20:52:32.526632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:52:32.526595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:52:32.526518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 20:52:32.526658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:52:32.526689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 20:52:32.526855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 20:52:32.526938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 20:52:32.526985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 20:52:32.527034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 20:52:32.527158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 20:52:32.527203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:52:33.392701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:52:33.395980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1120 20:52:33.433580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:52:33.443657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:52:33.524455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 20:52:33.554587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 20:52:33.566617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:52:33.628877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 20:52:33.704970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1120 20:52:35.522172       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 20:52:36 no-preload-480337 kubelet[2186]: I1120 20:52:36.099223    2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-480337" podStartSLOduration=1.0992013 podStartE2EDuration="1.0992013s" podCreationTimestamp="2025-11-20 20:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:36.09900424 +0000 UTC m=+1.138303166" watchObservedRunningTime="2025-11-20 20:52:36.0992013 +0000 UTC m=+1.138500227"
	Nov 20 20:52:36 no-preload-480337 kubelet[2186]: I1120 20:52:36.118809    2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-480337" podStartSLOduration=1.118791258 podStartE2EDuration="1.118791258s" podCreationTimestamp="2025-11-20 20:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:36.108353204 +0000 UTC m=+1.147652130" watchObservedRunningTime="2025-11-20 20:52:36.118791258 +0000 UTC m=+1.158090185"
	Nov 20 20:52:36 no-preload-480337 kubelet[2186]: I1120 20:52:36.130539    2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-480337" podStartSLOduration=2.130517344 podStartE2EDuration="2.130517344s" podCreationTimestamp="2025-11-20 20:52:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:36.118952783 +0000 UTC m=+1.158251715" watchObservedRunningTime="2025-11-20 20:52:36.130517344 +0000 UTC m=+1.169816271"
	Nov 20 20:52:36 no-preload-480337 kubelet[2186]: I1120 20:52:36.143423    2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-480337" podStartSLOduration=1.14335727 podStartE2EDuration="1.14335727s" podCreationTimestamp="2025-11-20 20:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:36.130984248 +0000 UTC m=+1.170283175" watchObservedRunningTime="2025-11-20 20:52:36.14335727 +0000 UTC m=+1.182656197"
	Nov 20 20:52:39 no-preload-480337 kubelet[2186]: I1120 20:52:39.485345    2186 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 20:52:39 no-preload-480337 kubelet[2186]: I1120 20:52:39.486055    2186 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 20:52:40 no-preload-480337 kubelet[2186]: I1120 20:52:40.360653    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da75f7f4-afa3-4976-98d9-0574067ce59e-lib-modules\") pod \"kube-proxy-hq4z4\" (UID: \"da75f7f4-afa3-4976-98d9-0574067ce59e\") " pod="kube-system/kube-proxy-hq4z4"
	Nov 20 20:52:40 no-preload-480337 kubelet[2186]: I1120 20:52:40.360712    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/53878683-6bf6-4530-ae0c-4d20a240db94-cni-cfg\") pod \"kindnet-rs8fb\" (UID: \"53878683-6bf6-4530-ae0c-4d20a240db94\") " pod="kube-system/kindnet-rs8fb"
	Nov 20 20:52:40 no-preload-480337 kubelet[2186]: I1120 20:52:40.360740    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53878683-6bf6-4530-ae0c-4d20a240db94-xtables-lock\") pod \"kindnet-rs8fb\" (UID: \"53878683-6bf6-4530-ae0c-4d20a240db94\") " pod="kube-system/kindnet-rs8fb"
	Nov 20 20:52:40 no-preload-480337 kubelet[2186]: I1120 20:52:40.360767    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53878683-6bf6-4530-ae0c-4d20a240db94-lib-modules\") pod \"kindnet-rs8fb\" (UID: \"53878683-6bf6-4530-ae0c-4d20a240db94\") " pod="kube-system/kindnet-rs8fb"
	Nov 20 20:52:40 no-preload-480337 kubelet[2186]: I1120 20:52:40.360793    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfr5f\" (UniqueName: \"kubernetes.io/projected/53878683-6bf6-4530-ae0c-4d20a240db94-kube-api-access-jfr5f\") pod \"kindnet-rs8fb\" (UID: \"53878683-6bf6-4530-ae0c-4d20a240db94\") " pod="kube-system/kindnet-rs8fb"
	Nov 20 20:52:40 no-preload-480337 kubelet[2186]: I1120 20:52:40.360821    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hntx\" (UniqueName: \"kubernetes.io/projected/da75f7f4-afa3-4976-98d9-0574067ce59e-kube-api-access-6hntx\") pod \"kube-proxy-hq4z4\" (UID: \"da75f7f4-afa3-4976-98d9-0574067ce59e\") " pod="kube-system/kube-proxy-hq4z4"
	Nov 20 20:52:40 no-preload-480337 kubelet[2186]: I1120 20:52:40.360844    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/da75f7f4-afa3-4976-98d9-0574067ce59e-kube-proxy\") pod \"kube-proxy-hq4z4\" (UID: \"da75f7f4-afa3-4976-98d9-0574067ce59e\") " pod="kube-system/kube-proxy-hq4z4"
	Nov 20 20:52:40 no-preload-480337 kubelet[2186]: I1120 20:52:40.360867    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da75f7f4-afa3-4976-98d9-0574067ce59e-xtables-lock\") pod \"kube-proxy-hq4z4\" (UID: \"da75f7f4-afa3-4976-98d9-0574067ce59e\") " pod="kube-system/kube-proxy-hq4z4"
	Nov 20 20:52:43 no-preload-480337 kubelet[2186]: I1120 20:52:43.102760    2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hq4z4" podStartSLOduration=3.102738361 podStartE2EDuration="3.102738361s" podCreationTimestamp="2025-11-20 20:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:41.096718694 +0000 UTC m=+6.136017620" watchObservedRunningTime="2025-11-20 20:52:43.102738361 +0000 UTC m=+8.142037289"
	Nov 20 20:52:48 no-preload-480337 kubelet[2186]: I1120 20:52:48.532944    2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rs8fb" podStartSLOduration=7.104820189 podStartE2EDuration="8.532925934s" podCreationTimestamp="2025-11-20 20:52:40 +0000 UTC" firstStartedPulling="2025-11-20 20:52:41.046946138 +0000 UTC m=+6.086245051" lastFinishedPulling="2025-11-20 20:52:42.475051876 +0000 UTC m=+7.514350796" observedRunningTime="2025-11-20 20:52:43.102919502 +0000 UTC m=+8.142218424" watchObservedRunningTime="2025-11-20 20:52:48.532925934 +0000 UTC m=+13.572224861"
	Nov 20 20:52:53 no-preload-480337 kubelet[2186]: I1120 20:52:53.277594    2186 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 20 20:52:53 no-preload-480337 kubelet[2186]: I1120 20:52:53.355149    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c1e8b5fa-dff6-41a8-afd9-c996cfda92a4-tmp\") pod \"storage-provisioner\" (UID: \"c1e8b5fa-dff6-41a8-afd9-c996cfda92a4\") " pod="kube-system/storage-provisioner"
	Nov 20 20:52:53 no-preload-480337 kubelet[2186]: I1120 20:52:53.355194    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6qjj\" (UniqueName: \"kubernetes.io/projected/c1e8b5fa-dff6-41a8-afd9-c996cfda92a4-kube-api-access-l6qjj\") pod \"storage-provisioner\" (UID: \"c1e8b5fa-dff6-41a8-afd9-c996cfda92a4\") " pod="kube-system/storage-provisioner"
	Nov 20 20:52:53 no-preload-480337 kubelet[2186]: I1120 20:52:53.355211    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bkvh\" (UniqueName: \"kubernetes.io/projected/a8f2defc-f970-4247-9872-a87af62a388d-kube-api-access-7bkvh\") pod \"coredns-66bc5c9577-74j8f\" (UID: \"a8f2defc-f970-4247-9872-a87af62a388d\") " pod="kube-system/coredns-66bc5c9577-74j8f"
	Nov 20 20:52:53 no-preload-480337 kubelet[2186]: I1120 20:52:53.355227    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8f2defc-f970-4247-9872-a87af62a388d-config-volume\") pod \"coredns-66bc5c9577-74j8f\" (UID: \"a8f2defc-f970-4247-9872-a87af62a388d\") " pod="kube-system/coredns-66bc5c9577-74j8f"
	Nov 20 20:52:54 no-preload-480337 kubelet[2186]: I1120 20:52:54.139887    2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-74j8f" podStartSLOduration=14.139827486 podStartE2EDuration="14.139827486s" podCreationTimestamp="2025-11-20 20:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:54.12743428 +0000 UTC m=+19.166733207" watchObservedRunningTime="2025-11-20 20:52:54.139827486 +0000 UTC m=+19.179126410"
	Nov 20 20:52:54 no-preload-480337 kubelet[2186]: I1120 20:52:54.151000    2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.150985801 podStartE2EDuration="14.150985801s" podCreationTimestamp="2025-11-20 20:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:54.150510637 +0000 UTC m=+19.189809564" watchObservedRunningTime="2025-11-20 20:52:54.150985801 +0000 UTC m=+19.190284726"
	Nov 20 20:52:56 no-preload-480337 kubelet[2186]: I1120 20:52:56.370874    2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p42zk\" (UniqueName: \"kubernetes.io/projected/73993c68-9404-4e81-9899-9c821e232fe0-kube-api-access-p42zk\") pod \"busybox\" (UID: \"73993c68-9404-4e81-9899-9c821e232fe0\") " pod="default/busybox"
	Nov 20 20:52:59 no-preload-480337 kubelet[2186]: I1120 20:52:59.138726    2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.715301712 podStartE2EDuration="3.138697429s" podCreationTimestamp="2025-11-20 20:52:56 +0000 UTC" firstStartedPulling="2025-11-20 20:52:56.696395638 +0000 UTC m=+21.735694549" lastFinishedPulling="2025-11-20 20:52:58.11979136 +0000 UTC m=+23.159090266" observedRunningTime="2025-11-20 20:52:59.138546404 +0000 UTC m=+24.177845334" watchObservedRunningTime="2025-11-20 20:52:59.138697429 +0000 UTC m=+24.177996355"
	
	
	==> storage-provisioner [4df157c5b9a1bbabf656e7904180d7e8e7306ea3460d3757dad98c182ce4798e] <==
	I1120 20:52:53.793797       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 20:52:53.801186       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 20:52:53.801244       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 20:52:53.803463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:52:53.808939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 20:52:53.809251       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 20:52:53.809361       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cb50d925-0e04-4d42-8ce6-2e966809ec0a", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-480337_0ce61194-19e1-4c1f-9032-28d707dcb80c became leader
	I1120 20:52:53.809475       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-480337_0ce61194-19e1-4c1f-9032-28d707dcb80c!
	W1120 20:52:53.811890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:52:53.815004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 20:52:53.910554       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-480337_0ce61194-19e1-4c1f-9032-28d707dcb80c!
	W1120 20:52:55.818464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:52:55.823164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:52:57.826649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:52:57.831142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:52:59.833907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:52:59.838818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:53:01.842224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:53:01.846185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:53:03.849548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:53:03.853515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:53:05.855809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:53:05.860241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:53:07.864124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:53:07.868720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-480337 -n no-preload-480337
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-480337 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (12.68s)
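For ad-hoc triage outside the harness, the two post-mortem probes immediately above can be reused as-is; the only change below is shell quoting, since the '!' in the field selector and the braces in the Go templates can be mangled by an interactive shell (the harness passes arguments already tokenized). Profile name as in this report; out/minikube-linux-amd64 is the binary built by the job.

	# API server status for the profile, as minikube reports it
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p no-preload-480337 -n no-preload-480337

	# names of every pod, in any namespace, not currently in phase Running
	kubectl --context no-preload-480337 get po -A \
	  --field-selector='status.phase!=Running' \
	  -o=jsonpath='{.items[*].metadata.name}'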

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (12.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-954820 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c1920ad7-2d95-4409-be9d-031c42380cd6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c1920ad7-2d95-4409-be9d-031c42380cd6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004647879s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-954820 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
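This assertion is the actual failure: the busybox pod deploys and reports healthy, but the open-files soft limit inside the container is 1024 rather than the 1048576 the test expects. A minimal way to re-check this by hand, assuming the embed-certs-954820 profile is still running (the first command mirrors the test's probe; the second is an extra diagnostic for the node-side containerd limit, which is one plausible origin of the value the container inherits):

	# repeat the probe the test runs inside the busybox pod
	kubectl --context embed-certs-954820 exec busybox -- /bin/sh -c "ulimit -n"

	# file-descriptor limit configured for containerd on the minikube node container
	out/minikube-linux-amd64 -p embed-certs-954820 ssh -- systemctl show containerd --property=LimitNOFILE

If the node-side limit (or the runtime's default RLIMIT_NOFILE for new containers) is lower than 1048576, pods started under it would inherit the smaller value, which would be consistent with the 1024 seen here.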
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-954820
helpers_test.go:243: (dbg) docker inspect embed-certs-954820:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2e456685b6f2524aa0eb2dbb0844c721c48386b2d98a7097d8e5af74f5f8b189",
	        "Created": "2025-11-20T20:54:17.402845892Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 263979,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T20:54:17.445172033Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/2e456685b6f2524aa0eb2dbb0844c721c48386b2d98a7097d8e5af74f5f8b189/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2e456685b6f2524aa0eb2dbb0844c721c48386b2d98a7097d8e5af74f5f8b189/hostname",
	        "HostsPath": "/var/lib/docker/containers/2e456685b6f2524aa0eb2dbb0844c721c48386b2d98a7097d8e5af74f5f8b189/hosts",
	        "LogPath": "/var/lib/docker/containers/2e456685b6f2524aa0eb2dbb0844c721c48386b2d98a7097d8e5af74f5f8b189/2e456685b6f2524aa0eb2dbb0844c721c48386b2d98a7097d8e5af74f5f8b189-json.log",
	        "Name": "/embed-certs-954820",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-954820:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-954820",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2e456685b6f2524aa0eb2dbb0844c721c48386b2d98a7097d8e5af74f5f8b189",
	                "LowerDir": "/var/lib/docker/overlay2/b9d6896e2a0d453970469be59c51b81392c7fe44e94394cf83a9e893efbcff14-init/diff:/var/lib/docker/overlay2/b8e13cfd95c92c89e06ea4ca61f150e2b9e9586529048197192d1a83648ef8cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b9d6896e2a0d453970469be59c51b81392c7fe44e94394cf83a9e893efbcff14/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b9d6896e2a0d453970469be59c51b81392c7fe44e94394cf83a9e893efbcff14/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b9d6896e2a0d453970469be59c51b81392c7fe44e94394cf83a9e893efbcff14/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-954820",
	                "Source": "/var/lib/docker/volumes/embed-certs-954820/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-954820",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-954820",
	                "name.minikube.sigs.k8s.io": "embed-certs-954820",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "58172182d3472f13e573b2487006c7e39072901a0456bf781944c7d957a6fcdf",
	            "SandboxKey": "/var/run/docker/netns/58172182d347",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-954820": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef2a95d8f5448f1f3bd6bffb0f08e5b56243969aa4a812553444ef9c9270c6b4",
	                    "EndpointID": "9439522a393e93d637703bb4d382a8fe106e99800a24486f90f9cd5ac58c0da7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "8e:b7:28:94:45:0b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-954820",
	                        "2e456685b6f2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
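The inspect dump above is long; the fields most relevant to this failure are HostConfig.Ulimits, which is empty (so the kic node container presumably falls back to the Docker daemon's default limits), and the published host ports. When only those are needed, docker inspect can be narrowed with a Go template instead of dumping the whole document; a sketch against the same container name:

	# ulimit overrides set on the minikube node container (empty in the dump above)
	docker inspect --format '{{json .HostConfig.Ulimits}}' embed-certs-954820

	# host ports published for the node's SSH (22) and API server (8443) endpoints
	docker inspect --format '{{json .NetworkSettings.Ports}}' embed-certs-954820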
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-954820 -n embed-certs-954820
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-954820 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-954820 logs -n 25: (1.065296542s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ start   │ -p old-k8s-version-715005 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-715005       │ jenkins │ v1.37.0 │ 20 Nov 25 20:53 UTC │ 20 Nov 25 20:53 UTC │
	│ addons  │ enable metrics-server -p no-preload-480337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:53 UTC │ 20 Nov 25 20:53 UTC │
	│ stop    │ -p no-preload-480337 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:53 UTC │ 20 Nov 25 20:53 UTC │
	│ addons  │ enable dashboard -p no-preload-480337 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:53 UTC │ 20 Nov 25 20:53 UTC │
	│ start   │ -p no-preload-480337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:53 UTC │ 20 Nov 25 20:54 UTC │
	│ image   │ old-k8s-version-715005 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-715005       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ pause   │ -p old-k8s-version-715005 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-715005       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ unpause │ -p old-k8s-version-715005 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-715005       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ delete  │ -p old-k8s-version-715005                                                                                                                                                                                                                           │ old-k8s-version-715005       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ delete  │ -p old-k8s-version-715005                                                                                                                                                                                                                           │ old-k8s-version-715005       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ start   │ -p embed-certs-954820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-954820           │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ image   │ no-preload-480337 image list --format=json                                                                                                                                                                                                          │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ pause   │ -p no-preload-480337 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ start   │ -p cert-expiration-137718 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-137718       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ unpause │ -p no-preload-480337 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ delete  │ -p no-preload-480337                                                                                                                                                                                                                                │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ delete  │ -p no-preload-480337                                                                                                                                                                                                                                │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ delete  │ -p disable-driver-mounts-311936                                                                                                                                                                                                                     │ disable-driver-mounts-311936 │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ start   │ -p default-k8s-diff-port-053182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-053182 │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │                     │
	│ delete  │ -p cert-expiration-137718                                                                                                                                                                                                                           │ cert-expiration-137718       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ start   │ -p newest-cni-439796 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ addons  │ enable metrics-server -p newest-cni-439796 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ stop    │ -p newest-cni-439796 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ addons  │ enable dashboard -p newest-cni-439796 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ start   │ -p newest-cni-439796 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:54:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:54:59.857828  278240 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:54:59.858105  278240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:54:59.858115  278240 out.go:374] Setting ErrFile to fd 2...
	I1120 20:54:59.858119  278240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:54:59.858349  278240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
	I1120 20:54:59.858826  278240 out.go:368] Setting JSON to false
	I1120 20:54:59.860194  278240 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2252,"bootTime":1763669848,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:54:59.860277  278240 start.go:143] virtualization: kvm guest
	I1120 20:54:59.862251  278240 out.go:179] * [newest-cni-439796] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:54:59.863664  278240 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:54:59.863667  278240 notify.go:221] Checking for updates...
	I1120 20:54:59.864889  278240 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:54:59.866102  278240 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3769/kubeconfig
	I1120 20:54:59.867392  278240 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3769/.minikube
	I1120 20:54:59.868550  278240 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:54:59.869682  278240 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:54:59.871457  278240 config.go:182] Loaded profile config "newest-cni-439796": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:54:59.871972  278240 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:54:59.895937  278240 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:54:59.896024  278240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:54:59.953310  278240 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-20 20:54:59.943244297 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:54:59.953450  278240 docker.go:319] overlay module found
	I1120 20:54:59.955196  278240 out.go:179] * Using the docker driver based on existing profile
	I1120 20:54:59.956312  278240 start.go:309] selected driver: docker
	I1120 20:54:59.956329  278240 start.go:930] validating driver "docker" against &{Name:newest-cni-439796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-439796 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:54:59.956444  278240 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:54:59.956970  278240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:55:00.019097  278240 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-20 20:55:00.008082303 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:55:00.019426  278240 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 20:55:00.019462  278240 cni.go:84] Creating CNI manager for ""
	I1120 20:55:00.019528  278240 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 20:55:00.019596  278240 start.go:353] cluster config:
	{Name:newest-cni-439796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-439796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:55:00.021170  278240 out.go:179] * Starting "newest-cni-439796" primary control-plane node in "newest-cni-439796" cluster
	I1120 20:55:00.022241  278240 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1120 20:55:00.023448  278240 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:55:00.024648  278240 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 20:55:00.024678  278240 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1120 20:55:00.024688  278240 cache.go:65] Caching tarball of preloaded images
	I1120 20:55:00.024751  278240 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:55:00.024781  278240 preload.go:238] Found /home/jenkins/minikube-integration/21923-3769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1120 20:55:00.024793  278240 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1120 20:55:00.024892  278240 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796/config.json ...
	I1120 20:55:00.047349  278240 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 20:55:00.047385  278240 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 20:55:00.047421  278240 cache.go:243] Successfully downloaded all kic artifacts
	I1120 20:55:00.047453  278240 start.go:360] acquireMachinesLock for newest-cni-439796: {Name:mkd377b5021ac8b488b2c648334cf58462a4dda8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:55:00.047519  278240 start.go:364] duration metric: took 41.671µs to acquireMachinesLock for "newest-cni-439796"
	I1120 20:55:00.047542  278240 start.go:96] Skipping create...Using existing machine configuration
	I1120 20:55:00.047552  278240 fix.go:54] fixHost starting: 
	I1120 20:55:00.047793  278240 cli_runner.go:164] Run: docker container inspect newest-cni-439796 --format={{.State.Status}}
	I1120 20:55:00.066752  278240 fix.go:112] recreateIfNeeded on newest-cni-439796: state=Stopped err=<nil>
	W1120 20:55:00.066782  278240 fix.go:138] unexpected machine state, will restart: <nil>
	W1120 20:54:59.168958  267938 node_ready.go:57] node "default-k8s-diff-port-053182" has "Ready":"False" status (will retry)
	W1120 20:55:01.169101  267938 node_ready.go:57] node "default-k8s-diff-port-053182" has "Ready":"False" status (will retry)
	I1120 20:55:01.669497  267938 node_ready.go:49] node "default-k8s-diff-port-053182" is "Ready"
	I1120 20:55:01.669530  267938 node_ready.go:38] duration metric: took 11.503696878s for node "default-k8s-diff-port-053182" to be "Ready" ...
	I1120 20:55:01.669547  267938 api_server.go:52] waiting for apiserver process to appear ...
	I1120 20:55:01.669608  267938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:55:01.684444  267938 api_server.go:72] duration metric: took 11.853641818s to wait for apiserver process to appear ...
	I1120 20:55:01.684479  267938 api_server.go:88] waiting for apiserver healthz status ...
	I1120 20:55:01.684517  267938 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1120 20:55:01.690782  267938 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1120 20:55:01.691893  267938 api_server.go:141] control plane version: v1.34.1
	I1120 20:55:01.691922  267938 api_server.go:131] duration metric: took 7.434681ms to wait for apiserver health ...
	I1120 20:55:01.691934  267938 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 20:55:01.695775  267938 system_pods.go:59] 8 kube-system pods found
	I1120 20:55:01.695832  267938 system_pods.go:61] "coredns-66bc5c9577-m5kfb" [7af76736-ef8a-434f-ad0c-b52641f9f02d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:55:01.695845  267938 system_pods.go:61] "etcd-default-k8s-diff-port-053182" [bd91f04b-5f3e-4a56-9854-44217a3e84c4] Running
	I1120 20:55:01.695858  267938 system_pods.go:61] "kindnet-sg6pg" [1f060cb7-fe2e-40da-b620-0ae4ab1b46ca] Running
	I1120 20:55:01.695873  267938 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-053182" [233f521c-596a-48b5-a075-6f7047f8681e] Running
	I1120 20:55:01.695882  267938 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-053182" [b8030abf-6545-401c-9be1-ff6d1e183855] Running
	I1120 20:55:01.695888  267938 system_pods.go:61] "kube-proxy-9dwtf" [f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3] Running
	I1120 20:55:01.695897  267938 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-053182" [e69e64b5-7879-46ff-9920-5090e462be17] Running
	I1120 20:55:01.695905  267938 system_pods.go:61] "storage-provisioner" [47956acc-9579-4eb7-9d9f-a6e82239fcd8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:55:01.695917  267938 system_pods.go:74] duration metric: took 3.975656ms to wait for pod list to return data ...
	I1120 20:55:01.695931  267938 default_sa.go:34] waiting for default service account to be created ...
	I1120 20:55:01.702110  267938 default_sa.go:45] found service account: "default"
	I1120 20:55:01.702135  267938 default_sa.go:55] duration metric: took 6.196385ms for default service account to be created ...
	I1120 20:55:01.702146  267938 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 20:55:01.796537  267938 system_pods.go:86] 8 kube-system pods found
	I1120 20:55:01.796576  267938 system_pods.go:89] "coredns-66bc5c9577-m5kfb" [7af76736-ef8a-434f-ad0c-b52641f9f02d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:55:01.796585  267938 system_pods.go:89] "etcd-default-k8s-diff-port-053182" [bd91f04b-5f3e-4a56-9854-44217a3e84c4] Running
	I1120 20:55:01.796599  267938 system_pods.go:89] "kindnet-sg6pg" [1f060cb7-fe2e-40da-b620-0ae4ab1b46ca] Running
	I1120 20:55:01.796605  267938 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-053182" [233f521c-596a-48b5-a075-6f7047f8681e] Running
	I1120 20:55:01.796610  267938 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-053182" [b8030abf-6545-401c-9be1-ff6d1e183855] Running
	I1120 20:55:01.796621  267938 system_pods.go:89] "kube-proxy-9dwtf" [f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3] Running
	I1120 20:55:01.796626  267938 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-053182" [e69e64b5-7879-46ff-9920-5090e462be17] Running
	I1120 20:55:01.796634  267938 system_pods.go:89] "storage-provisioner" [47956acc-9579-4eb7-9d9f-a6e82239fcd8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:55:01.796670  267938 retry.go:31] will retry after 230.554359ms: missing components: kube-dns
	I1120 20:55:02.032424  267938 system_pods.go:86] 8 kube-system pods found
	I1120 20:55:02.032457  267938 system_pods.go:89] "coredns-66bc5c9577-m5kfb" [7af76736-ef8a-434f-ad0c-b52641f9f02d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:55:02.032465  267938 system_pods.go:89] "etcd-default-k8s-diff-port-053182" [bd91f04b-5f3e-4a56-9854-44217a3e84c4] Running
	I1120 20:55:02.032474  267938 system_pods.go:89] "kindnet-sg6pg" [1f060cb7-fe2e-40da-b620-0ae4ab1b46ca] Running
	I1120 20:55:02.032479  267938 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-053182" [233f521c-596a-48b5-a075-6f7047f8681e] Running
	I1120 20:55:02.032484  267938 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-053182" [b8030abf-6545-401c-9be1-ff6d1e183855] Running
	I1120 20:55:02.032489  267938 system_pods.go:89] "kube-proxy-9dwtf" [f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3] Running
	I1120 20:55:02.032493  267938 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-053182" [e69e64b5-7879-46ff-9920-5090e462be17] Running
	I1120 20:55:02.032500  267938 system_pods.go:89] "storage-provisioner" [47956acc-9579-4eb7-9d9f-a6e82239fcd8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:55:02.032519  267938 retry.go:31] will retry after 327.025815ms: missing components: kube-dns
	I1120 20:55:02.365222  267938 system_pods.go:86] 8 kube-system pods found
	I1120 20:55:02.365305  267938 system_pods.go:89] "coredns-66bc5c9577-m5kfb" [7af76736-ef8a-434f-ad0c-b52641f9f02d] Running
	I1120 20:55:02.365316  267938 system_pods.go:89] "etcd-default-k8s-diff-port-053182" [bd91f04b-5f3e-4a56-9854-44217a3e84c4] Running
	I1120 20:55:02.365326  267938 system_pods.go:89] "kindnet-sg6pg" [1f060cb7-fe2e-40da-b620-0ae4ab1b46ca] Running
	I1120 20:55:02.365334  267938 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-053182" [233f521c-596a-48b5-a075-6f7047f8681e] Running
	I1120 20:55:02.365351  267938 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-053182" [b8030abf-6545-401c-9be1-ff6d1e183855] Running
	I1120 20:55:02.365357  267938 system_pods.go:89] "kube-proxy-9dwtf" [f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3] Running
	I1120 20:55:02.365363  267938 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-053182" [e69e64b5-7879-46ff-9920-5090e462be17] Running
	I1120 20:55:02.365394  267938 system_pods.go:89] "storage-provisioner" [47956acc-9579-4eb7-9d9f-a6e82239fcd8] Running
	I1120 20:55:02.365405  267938 system_pods.go:126] duration metric: took 663.251244ms to wait for k8s-apps to be running ...
	I1120 20:55:02.365435  267938 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:55:02.365836  267938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:55:02.379259  267938 system_svc.go:56] duration metric: took 13.837433ms WaitForService to wait for kubelet
	I1120 20:55:02.379293  267938 kubeadm.go:587] duration metric: took 12.548497918s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:55:02.379319  267938 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:55:02.382189  267938 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:55:02.382219  267938 node_conditions.go:123] node cpu capacity is 8
	I1120 20:55:02.382231  267938 node_conditions.go:105] duration metric: took 2.905948ms to run NodePressure ...
	I1120 20:55:02.382244  267938 start.go:242] waiting for startup goroutines ...
	I1120 20:55:02.382254  267938 start.go:247] waiting for cluster config update ...
	I1120 20:55:02.382269  267938 start.go:256] writing updated cluster config ...
	I1120 20:55:02.382592  267938 ssh_runner.go:195] Run: rm -f paused
	I1120 20:55:02.386235  267938 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:55:02.389651  267938 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m5kfb" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.393726  267938 pod_ready.go:94] pod "coredns-66bc5c9577-m5kfb" is "Ready"
	I1120 20:55:02.393745  267938 pod_ready.go:86] duration metric: took 4.074153ms for pod "coredns-66bc5c9577-m5kfb" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.395689  267938 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.399316  267938 pod_ready.go:94] pod "etcd-default-k8s-diff-port-053182" is "Ready"
	I1120 20:55:02.399335  267938 pod_ready.go:86] duration metric: took 3.628858ms for pod "etcd-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.401248  267938 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.404743  267938 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-053182" is "Ready"
	I1120 20:55:02.404759  267938 pod_ready.go:86] duration metric: took 3.496456ms for pod "kube-apiserver-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.406414  267938 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.790539  267938 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-053182" is "Ready"
	I1120 20:55:02.790573  267938 pod_ready.go:86] duration metric: took 384.138389ms for pod "kube-controller-manager-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.990773  267938 pod_ready.go:83] waiting for pod "kube-proxy-9dwtf" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:03.390942  267938 pod_ready.go:94] pod "kube-proxy-9dwtf" is "Ready"
	I1120 20:55:03.390966  267938 pod_ready.go:86] duration metric: took 400.162298ms for pod "kube-proxy-9dwtf" in "kube-system" namespace to be "Ready" or be gone ...
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	d3441ee2499da       56cc512116c8f       6 seconds ago       Running             busybox                   0                   6281561cd9d13       busybox                                      default
	9aadc48bf3961       52546a367cc9e       11 seconds ago      Running             coredns                   0                   b2bf7b2d8cf4c       coredns-66bc5c9577-x7zhn                     kube-system
	6e397d99719e8       6e38f40d628db       11 seconds ago      Running             storage-provisioner       0                   b52812d341c06       storage-provisioner                          kube-system
	b3b1249873abc       409467f978b4a       22 seconds ago      Running             kindnet-cni               0                   13d37228a371a       kindnet-2hlth                                kube-system
	63121e4af9723       fc25172553d79       22 seconds ago      Running             kube-proxy                0                   5e5b66e111e34       kube-proxy-72rnp                             kube-system
	5efb0a99caac2       c80c8dbafe7dd       34 seconds ago      Running             kube-controller-manager   0                   b0e7764dfffcd       kube-controller-manager-embed-certs-954820   kube-system
	683dc01ab1049       7dd6aaa1717ab       34 seconds ago      Running             kube-scheduler            0                   98c6eb0af96be       kube-scheduler-embed-certs-954820            kube-system
	ecb7ea8c22d19       5f1f5298c888d       34 seconds ago      Running             etcd                      0                   1d2184c6b581d       etcd-embed-certs-954820                      kube-system
	701bc97da45da       c3994bc696102       34 seconds ago      Running             kube-apiserver            0                   5749b9fa3c2f8       kube-apiserver-embed-certs-954820            kube-system
	
	
	==> containerd <==
	Nov 20 20:54:52 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:52.909722408Z" level=info msg="CreateContainer within sandbox \"b52812d341c067a9f2a2c0c902b0f0560c86f174f65b3d840cb58a0337581c65\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"6e397d99719e867eabf07e16038fb6612f5d6d9476c297caa9e75e84dd8995f1\""
	Nov 20 20:54:52 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:52.910799455Z" level=info msg="StartContainer for \"6e397d99719e867eabf07e16038fb6612f5d6d9476c297caa9e75e84dd8995f1\""
	Nov 20 20:54:52 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:52.911690129Z" level=info msg="connecting to shim 6e397d99719e867eabf07e16038fb6612f5d6d9476c297caa9e75e84dd8995f1" address="unix:///run/containerd/s/0dd8ef32bd568491c3581c978a6c6c7442fd8f675f0f8f9c937557d5224671dc" protocol=ttrpc version=3
	Nov 20 20:54:52 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:52.913398654Z" level=info msg="Container 9aadc48bf39619a170cd3c9e979ba2f4fcb645da619b9c1507dad3e583fcd784: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 20:54:52 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:52.919488296Z" level=info msg="CreateContainer within sandbox \"b2bf7b2d8cf4ccd0c89e1735e79d5dfb3f28bb95bba6d34ba28a4acefe760c6b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9aadc48bf39619a170cd3c9e979ba2f4fcb645da619b9c1507dad3e583fcd784\""
	Nov 20 20:54:52 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:52.920157624Z" level=info msg="StartContainer for \"9aadc48bf39619a170cd3c9e979ba2f4fcb645da619b9c1507dad3e583fcd784\""
	Nov 20 20:54:52 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:52.921197058Z" level=info msg="connecting to shim 9aadc48bf39619a170cd3c9e979ba2f4fcb645da619b9c1507dad3e583fcd784" address="unix:///run/containerd/s/c0ea3e1f21d52782fe126467faec3b36dfefcec2df93781e2633c92f90bdf11a" protocol=ttrpc version=3
	Nov 20 20:54:52 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:52.963720184Z" level=info msg="StartContainer for \"6e397d99719e867eabf07e16038fb6612f5d6d9476c297caa9e75e84dd8995f1\" returns successfully"
	Nov 20 20:54:52 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:52.971510367Z" level=info msg="StartContainer for \"9aadc48bf39619a170cd3c9e979ba2f4fcb645da619b9c1507dad3e583fcd784\" returns successfully"
	Nov 20 20:54:55 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:55.477244729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:c1920ad7-2d95-4409-be9d-031c42380cd6,Namespace:default,Attempt:0,}"
	Nov 20 20:54:55 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:55.516728969Z" level=info msg="connecting to shim 6281561cd9d13f57664cd402e6c498fd9f9fa1dea9e8ecce53c47212067df165" address="unix:///run/containerd/s/1a618de0669581369747a58490a394aaf623b9fd1470a4ce67e6405fae4199a6" namespace=k8s.io protocol=ttrpc version=3
	Nov 20 20:54:55 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:55.584777884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:c1920ad7-2d95-4409-be9d-031c42380cd6,Namespace:default,Attempt:0,} returns sandbox id \"6281561cd9d13f57664cd402e6c498fd9f9fa1dea9e8ecce53c47212067df165\""
	Nov 20 20:54:55 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:55.587109491Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.235816622Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.236670481Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396643"
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.237898816Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.239765592Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.240194452Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 1.653048294s"
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.240233567Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.246970289Z" level=info msg="CreateContainer within sandbox \"6281561cd9d13f57664cd402e6c498fd9f9fa1dea9e8ecce53c47212067df165\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.253552627Z" level=info msg="Container d3441ee2499da6faa7dde6934067483d46e71e6dc5a9056cda37187cf77cdedd: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.262961093Z" level=info msg="CreateContainer within sandbox \"6281561cd9d13f57664cd402e6c498fd9f9fa1dea9e8ecce53c47212067df165\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"d3441ee2499da6faa7dde6934067483d46e71e6dc5a9056cda37187cf77cdedd\""
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.263688911Z" level=info msg="StartContainer for \"d3441ee2499da6faa7dde6934067483d46e71e6dc5a9056cda37187cf77cdedd\""
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.264725431Z" level=info msg="connecting to shim d3441ee2499da6faa7dde6934067483d46e71e6dc5a9056cda37187cf77cdedd" address="unix:///run/containerd/s/1a618de0669581369747a58490a394aaf623b9fd1470a4ce67e6405fae4199a6" protocol=ttrpc version=3
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.314647182Z" level=info msg="StartContainer for \"d3441ee2499da6faa7dde6934067483d46e71e6dc5a9056cda37187cf77cdedd\" returns successfully"
	
	
	==> coredns [9aadc48bf39619a170cd3c9e979ba2f4fcb645da619b9c1507dad3e583fcd784] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44570 - 62339 "HINFO IN 5223488822803614498.4173452436444296566. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051798526s
	
	
	==> describe nodes <==
	Name:               embed-certs-954820
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-954820
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=embed-certs-954820
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_54_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:54:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-954820
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:54:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:54:52 +0000   Thu, 20 Nov 2025 20:54:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:54:52 +0000   Thu, 20 Nov 2025 20:54:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:54:52 +0000   Thu, 20 Nov 2025 20:54:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:54:52 +0000   Thu, 20 Nov 2025 20:54:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-954820
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                0ccc10b4-9e1e-496b-be58-89da7f82552b
	  Boot ID:                    7bcace10-faf8-4276-88b3-44b8d57bd915
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-x7zhn                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-embed-certs-954820                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-2hlth                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-embed-certs-954820             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-embed-certs-954820    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-72rnp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-embed-certs-954820             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22s                kube-proxy       
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node embed-certs-954820 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node embed-certs-954820 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x7 over 35s)  kubelet          Node embed-certs-954820 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  35s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28s                kubelet          Node embed-certs-954820 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s                kubelet          Node embed-certs-954820 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s                kubelet          Node embed-certs-954820 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s                node-controller  Node embed-certs-954820 event: Registered Node embed-certs-954820 in Controller
	  Normal  NodeReady                12s                kubelet          Node embed-certs-954820 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov20 20:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001791] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.083011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400115] i8042: Warning: Keylock active
	[  +0.013837] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499559] block sda: the capability attribute has been deprecated.
	[  +0.087912] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024934] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.433429] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [ecb7ea8c22d19c62a975927d40533449c3000063ebed8bf1f3946e15a961f8f5] <==
	{"level":"warn","ts":"2025-11-20T20:54:33.655999Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-20T20:54:33.265402Z","time spent":"390.55134ms","remote":"127.0.0.1:57210","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":692,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/flowschemas/kube-system-service-accounts\" mod_revision:0 > success:<request_put:<key:\"/registry/flowschemas/kube-system-service-accounts\" value_size:634 >> failure:<>"}
	{"level":"info","ts":"2025-11-20T20:54:33.656065Z","caller":"traceutil/trace.go:172","msg":"trace[421027544] transaction","detail":"{read_only:false; response_revision:62; number_of_response:1; }","duration":"389.473516ms","start":"2025-11-20T20:54:33.266580Z","end":"2025-11-20T20:54:33.656054Z","steps":["trace[421027544] 'process raft request'  (duration: 389.23367ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:54:33.656104Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-20T20:54:33.266831Z","time spent":"389.183098ms","remote":"127.0.0.1:57110","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":464,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/priorityclasses/system-cluster-critical\" mod_revision:0 > success:<request_put:<key:\"/registry/priorityclasses/system-cluster-critical\" value_size:407 >> failure:<>"}
	{"level":"info","ts":"2025-11-20T20:54:33.656134Z","caller":"traceutil/trace.go:172","msg":"trace[393182147] transaction","detail":"{read_only:false; response_revision:63; number_of_response:1; }","duration":"389.354543ms","start":"2025-11-20T20:54:33.266771Z","end":"2025-11-20T20:54:33.656126Z","steps":["trace[393182147] 'process raft request'  (duration: 389.106273ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:54:33.656303Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-20T20:54:33.266565Z","time spent":"389.521731ms","remote":"127.0.0.1:57210","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1101,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/flowschemas/kube-controller-manager\" mod_revision:55 > success:<request_put:<key:\"/registry/flowschemas/kube-controller-manager\" value_size:1048 >> failure:<request_range:<key:\"/registry/flowschemas/kube-controller-manager\" > >"}
	{"level":"warn","ts":"2025-11-20T20:54:33.656441Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-20T20:54:33.266732Z","time spent":"389.416788ms","remote":"127.0.0.1:56538","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":712,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/events/default/embed-certs-954820.1879d2671b5869cd\" mod_revision:49 > success:<request_put:<key:\"/registry/events/default/embed-certs-954820.1879d2671b5869cd\" value_size:634 lease:499225158781226421 >> failure:<request_range:<key:\"/registry/events/default/embed-certs-954820.1879d2671b5869cd\" > >"}
	{"level":"info","ts":"2025-11-20T20:54:33.872888Z","caller":"traceutil/trace.go:172","msg":"trace[1568779350] linearizableReadLoop","detail":"{readStateIndex:70; appliedIndex:70; }","duration":"140.726728ms","start":"2025-11-20T20:54:33.732139Z","end":"2025-11-20T20:54:33.872865Z","steps":["trace[1568779350] 'read index received'  (duration: 140.710723ms)","trace[1568779350] 'applied index is now lower than readState.Index'  (duration: 6.178µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T20:54:33.983721Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"251.560052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/edit\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-20T20:54:33.983804Z","caller":"traceutil/trace.go:172","msg":"trace[879466238] range","detail":"{range_begin:/registry/clusterroles/edit; range_end:; response_count:0; response_revision:65; }","duration":"251.645213ms","start":"2025-11-20T20:54:33.732131Z","end":"2025-11-20T20:54:33.983776Z","steps":["trace[879466238] 'agreement among raft nodes before linearized reading'  (duration: 140.809251ms)","trace[879466238] 'range keys from in-memory index tree'  (duration: 110.702028ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T20:54:33.983840Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.904659ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597195636002270 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/flowschemas/service-accounts\" mod_revision:0 > success:<request_put:<key:\"/registry/flowschemas/service-accounts\" value_size:615 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-20T20:54:33.983990Z","caller":"traceutil/trace.go:172","msg":"trace[1173711927] transaction","detail":"{read_only:false; response_revision:67; number_of_response:1; }","duration":"322.851107ms","start":"2025-11-20T20:54:33.661126Z","end":"2025-11-20T20:54:33.983978Z","steps":["trace[1173711927] 'process raft request'  (duration: 322.78296ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:54:33.984011Z","caller":"traceutil/trace.go:172","msg":"trace[1971353058] transaction","detail":"{read_only:false; response_revision:66; number_of_response:1; }","duration":"323.496231ms","start":"2025-11-20T20:54:33.660491Z","end":"2025-11-20T20:54:33.983987Z","steps":["trace[1971353058] 'process raft request'  (duration: 212.394278ms)","trace[1971353058] 'compare'  (duration: 110.798457ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T20:54:33.984094Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-20T20:54:33.660476Z","time spent":"323.576413ms","remote":"127.0.0.1:57210","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":661,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/flowschemas/service-accounts\" mod_revision:0 > success:<request_put:<key:\"/registry/flowschemas/service-accounts\" value_size:615 >> failure:<>"}
	{"level":"warn","ts":"2025-11-20T20:54:33.984125Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-20T20:54:33.661112Z","time spent":"322.934334ms","remote":"127.0.0.1:56538","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":708,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/events/default/embed-certs-954820.1879d2671b5896e5\" mod_revision:51 > success:<request_put:<key:\"/registry/events/default/embed-certs-954820.1879d2671b5896e5\" value_size:630 lease:499225158781226421 >> failure:<request_range:<key:\"/registry/events/default/embed-certs-954820.1879d2671b5896e5\" > >"}
	{"level":"info","ts":"2025-11-20T20:54:33.984721Z","caller":"traceutil/trace.go:172","msg":"trace[1382470456] transaction","detail":"{read_only:false; response_revision:68; number_of_response:1; }","duration":"251.064082ms","start":"2025-11-20T20:54:33.733648Z","end":"2025-11-20T20:54:33.984712Z","steps":["trace[1382470456] 'process raft request'  (duration: 250.989694ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:54:34.091705Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.660425ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/view\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-20T20:54:34.091771Z","caller":"traceutil/trace.go:172","msg":"trace[712806050] range","detail":"{range_begin:/registry/clusterroles/view; range_end:; response_count:0; response_revision:69; }","duration":"103.745711ms","start":"2025-11-20T20:54:33.988012Z","end":"2025-11-20T20:54:34.091757Z","steps":["trace[712806050] 'agreement among raft nodes before linearized reading'  (duration: 99.649158ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:54:34.091897Z","caller":"traceutil/trace.go:172","msg":"trace[1320144489] transaction","detail":"{read_only:false; response_revision:70; number_of_response:1; }","duration":"103.4332ms","start":"2025-11-20T20:54:33.988450Z","end":"2025-11-20T20:54:34.091883Z","steps":["trace[1320144489] 'process raft request'  (duration: 99.278006ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:54:34.220339Z","caller":"traceutil/trace.go:172","msg":"trace[872629498] linearizableReadLoop","detail":"{readStateIndex:77; appliedIndex:77; }","duration":"124.786666ms","start":"2025-11-20T20:54:34.095519Z","end":"2025-11-20T20:54:34.220306Z","steps":["trace[872629498] 'read index received'  (duration: 124.779173ms)","trace[872629498] 'applied index is now lower than readState.Index'  (duration: 6.506µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T20:54:34.311024Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"215.483473ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/cluster-admin\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-20T20:54:34.311081Z","caller":"traceutil/trace.go:172","msg":"trace[1168485939] range","detail":"{range_begin:/registry/clusterroles/cluster-admin; range_end:; response_count:0; response_revision:72; }","duration":"215.552305ms","start":"2025-11-20T20:54:34.095515Z","end":"2025-11-20T20:54:34.311067Z","steps":["trace[1168485939] 'agreement among raft nodes before linearized reading'  (duration: 124.879214ms)","trace[1168485939] 'range keys from in-memory index tree'  (duration: 90.57366ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T20:54:34.311236Z","caller":"traceutil/trace.go:172","msg":"trace[1296496767] transaction","detail":"{read_only:false; response_revision:73; number_of_response:1; }","duration":"216.147442ms","start":"2025-11-20T20:54:34.095066Z","end":"2025-11-20T20:54:34.311213Z","steps":["trace[1296496767] 'process raft request'  (duration: 125.290959ms)","trace[1296496767] 'compare'  (duration: 90.641689ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T20:54:34.311276Z","caller":"traceutil/trace.go:172","msg":"trace[1931010025] transaction","detail":"{read_only:false; response_revision:74; number_of_response:1; }","duration":"215.098536ms","start":"2025-11-20T20:54:34.096165Z","end":"2025-11-20T20:54:34.311263Z","steps":["trace[1931010025] 'process raft request'  (duration: 215.025132ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:54:34.311465Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"215.241207ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/embed-certs-954820.1879d2671b5869cd\" limit:1 ","response":"range_response_count:1 size:724"}
	{"level":"info","ts":"2025-11-20T20:54:34.311807Z","caller":"traceutil/trace.go:172","msg":"trace[862884614] range","detail":"{range_begin:/registry/events/default/embed-certs-954820.1879d2671b5869cd; range_end:; response_count:1; response_revision:74; }","duration":"215.574884ms","start":"2025-11-20T20:54:34.096206Z","end":"2025-11-20T20:54:34.311781Z","steps":["trace[862884614] 'agreement among raft nodes before linearized reading'  (duration: 215.134379ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:55:04 up 37 min,  0 user,  load average: 2.92, 2.85, 2.01
	Linux embed-certs-954820 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b3b1249873abc14f954f4d62e1593f12aec504feca9af1318cca0a9faa273bea] <==
	I1120 20:54:42.250570       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 20:54:42.250844       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 20:54:42.250983       1 main.go:148] setting mtu 1500 for CNI 
	I1120 20:54:42.251000       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 20:54:42.251013       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T20:54:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 20:54:42.450873       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 20:54:42.450934       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 20:54:42.450953       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 20:54:42.550820       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 20:54:42.850768       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 20:54:42.850837       1 metrics.go:72] Registering metrics
	I1120 20:54:42.850925       1 controller.go:711] "Syncing nftables rules"
	I1120 20:54:52.451636       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 20:54:52.451720       1 main.go:301] handling current node
	I1120 20:55:02.452952       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 20:55:02.453003       1 main.go:301] handling current node
	
	
	==> kube-apiserver [701bc97da45dafe69924b7d0298663b307e0de8bce555758070a8aaab74b7b28] <==
	I1120 20:54:32.169681       1 cache.go:39] Caches are synced for autoregister controller
	I1120 20:54:32.169954       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 20:54:32.300573       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1120 20:54:32.300927       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:54:32.302502       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 20:54:32.396542       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:54:32.398747       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 20:54:33.263847       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 20:54:33.657319       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 20:54:33.657347       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 20:54:34.877920       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 20:54:34.923111       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 20:54:35.067806       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 20:54:35.078987       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1120 20:54:35.080439       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 20:54:35.086563       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 20:54:35.114475       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 20:54:36.169550       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 20:54:36.178406       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 20:54:36.186005       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 20:54:40.417944       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:54:40.424653       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:54:40.867543       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 20:54:41.113895       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1120 20:55:03.279617       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:54524: use of closed network connection
	
	
	==> kube-controller-manager [5efb0a99caac24797372cd4ce9ed52e65067ca32732e38425559b88fefc42127] <==
	I1120 20:54:40.110484       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1120 20:54:40.110686       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 20:54:40.110694       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 20:54:40.110724       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 20:54:40.110825       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1120 20:54:40.111095       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 20:54:40.111112       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 20:54:40.111247       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 20:54:40.111316       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 20:54:40.112048       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 20:54:40.115326       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 20:54:40.115447       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1120 20:54:40.115490       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1120 20:54:40.115528       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1120 20:54:40.115535       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1120 20:54:40.115542       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1120 20:54:40.119893       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1120 20:54:40.119929       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:54:40.121161       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:54:40.122964       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-954820" podCIDRs=["10.244.0.0/24"]
	I1120 20:54:40.128032       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 20:54:40.136430       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 20:54:40.136721       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:54:40.159953       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 20:54:55.063154       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [63121e4af97231bbe9817cb5d8f21daa3e6e2de9fd7bc9742aa8901ad2361c5d] <==
	I1120 20:54:41.870745       1 server_linux.go:53] "Using iptables proxy"
	I1120 20:54:41.940522       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 20:54:42.040822       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 20:54:42.040862       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1120 20:54:42.040978       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 20:54:42.066003       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 20:54:42.066062       1 server_linux.go:132] "Using iptables Proxier"
	I1120 20:54:42.073247       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 20:54:42.073657       1 server.go:527] "Version info" version="v1.34.1"
	I1120 20:54:42.073760       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:54:42.075578       1 config.go:309] "Starting node config controller"
	I1120 20:54:42.075669       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 20:54:42.075691       1 config.go:106] "Starting endpoint slice config controller"
	I1120 20:54:42.075725       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 20:54:42.075748       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 20:54:42.075753       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 20:54:42.075699       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 20:54:42.075679       1 config.go:200] "Starting service config controller"
	I1120 20:54:42.075994       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 20:54:42.176571       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 20:54:42.176600       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 20:54:42.176588       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [683dc01ab10495c1b23b1e9c040e2c5ee29653a0c6b195c45f5b4e9618ef8227] <==
	E1120 20:54:32.626330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:54:32.626527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:54:32.626565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 20:54:32.626644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:54:32.626717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 20:54:33.453735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:54:33.514350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:54:33.568045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 20:54:33.719457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:54:33.778994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 20:54:33.859626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 20:54:33.877065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1120 20:54:33.885469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 20:54:33.905995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 20:54:33.911253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:54:33.930845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:54:34.074623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:54:34.128314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 20:54:34.145778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 20:54:34.180247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 20:54:34.185646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 20:54:34.196008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 20:54:34.209391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 20:54:34.209511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1120 20:54:36.022656       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 20:54:37 embed-certs-954820 kubelet[1447]: I1120 20:54:37.081964    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-954820" podStartSLOduration=2.081939096 podStartE2EDuration="2.081939096s" podCreationTimestamp="2025-11-20 20:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:37.071390641 +0000 UTC m=+1.142814919" watchObservedRunningTime="2025-11-20 20:54:37.081939096 +0000 UTC m=+1.153363368"
	Nov 20 20:54:37 embed-certs-954820 kubelet[1447]: I1120 20:54:37.095236    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-954820" podStartSLOduration=1.095214286 podStartE2EDuration="1.095214286s" podCreationTimestamp="2025-11-20 20:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:37.082168744 +0000 UTC m=+1.153593025" watchObservedRunningTime="2025-11-20 20:54:37.095214286 +0000 UTC m=+1.166638567"
	Nov 20 20:54:37 embed-certs-954820 kubelet[1447]: I1120 20:54:37.095430    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-954820" podStartSLOduration=1.095419442 podStartE2EDuration="1.095419442s" podCreationTimestamp="2025-11-20 20:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:37.094754286 +0000 UTC m=+1.166178569" watchObservedRunningTime="2025-11-20 20:54:37.095419442 +0000 UTC m=+1.166843722"
	Nov 20 20:54:37 embed-certs-954820 kubelet[1447]: I1120 20:54:37.110305    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-954820" podStartSLOduration=1.110258381 podStartE2EDuration="1.110258381s" podCreationTimestamp="2025-11-20 20:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:37.110028883 +0000 UTC m=+1.181453164" watchObservedRunningTime="2025-11-20 20:54:37.110258381 +0000 UTC m=+1.181682662"
	Nov 20 20:54:40 embed-certs-954820 kubelet[1447]: I1120 20:54:40.169620    1447 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 20:54:40 embed-certs-954820 kubelet[1447]: I1120 20:54:40.170257    1447 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 20:54:41 embed-certs-954820 kubelet[1447]: I1120 20:54:41.232250    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q6ff\" (UniqueName: \"kubernetes.io/projected/cf0a1b73-7d4e-45dd-b4b2-21c1af727959-kube-api-access-6q6ff\") pod \"kube-proxy-72rnp\" (UID: \"cf0a1b73-7d4e-45dd-b4b2-21c1af727959\") " pod="kube-system/kube-proxy-72rnp"
	Nov 20 20:54:41 embed-certs-954820 kubelet[1447]: I1120 20:54:41.232301    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4de21d7-92f8-41cb-b1b5-d6666a519ec0-xtables-lock\") pod \"kindnet-2hlth\" (UID: \"c4de21d7-92f8-41cb-b1b5-d6666a519ec0\") " pod="kube-system/kindnet-2hlth"
	Nov 20 20:54:41 embed-certs-954820 kubelet[1447]: I1120 20:54:41.232333    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4de21d7-92f8-41cb-b1b5-d6666a519ec0-lib-modules\") pod \"kindnet-2hlth\" (UID: \"c4de21d7-92f8-41cb-b1b5-d6666a519ec0\") " pod="kube-system/kindnet-2hlth"
	Nov 20 20:54:41 embed-certs-954820 kubelet[1447]: I1120 20:54:41.232467    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cf0a1b73-7d4e-45dd-b4b2-21c1af727959-kube-proxy\") pod \"kube-proxy-72rnp\" (UID: \"cf0a1b73-7d4e-45dd-b4b2-21c1af727959\") " pod="kube-system/kube-proxy-72rnp"
	Nov 20 20:54:41 embed-certs-954820 kubelet[1447]: I1120 20:54:41.232522    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf0a1b73-7d4e-45dd-b4b2-21c1af727959-xtables-lock\") pod \"kube-proxy-72rnp\" (UID: \"cf0a1b73-7d4e-45dd-b4b2-21c1af727959\") " pod="kube-system/kube-proxy-72rnp"
	Nov 20 20:54:41 embed-certs-954820 kubelet[1447]: I1120 20:54:41.232548    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ctng\" (UniqueName: \"kubernetes.io/projected/c4de21d7-92f8-41cb-b1b5-d6666a519ec0-kube-api-access-9ctng\") pod \"kindnet-2hlth\" (UID: \"c4de21d7-92f8-41cb-b1b5-d6666a519ec0\") " pod="kube-system/kindnet-2hlth"
	Nov 20 20:54:41 embed-certs-954820 kubelet[1447]: I1120 20:54:41.232572    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf0a1b73-7d4e-45dd-b4b2-21c1af727959-lib-modules\") pod \"kube-proxy-72rnp\" (UID: \"cf0a1b73-7d4e-45dd-b4b2-21c1af727959\") " pod="kube-system/kube-proxy-72rnp"
	Nov 20 20:54:41 embed-certs-954820 kubelet[1447]: I1120 20:54:41.232594    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c4de21d7-92f8-41cb-b1b5-d6666a519ec0-cni-cfg\") pod \"kindnet-2hlth\" (UID: \"c4de21d7-92f8-41cb-b1b5-d6666a519ec0\") " pod="kube-system/kindnet-2hlth"
	Nov 20 20:54:42 embed-certs-954820 kubelet[1447]: I1120 20:54:42.068136    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-72rnp" podStartSLOduration=1.068118226 podStartE2EDuration="1.068118226s" podCreationTimestamp="2025-11-20 20:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:42.067919474 +0000 UTC m=+6.139343756" watchObservedRunningTime="2025-11-20 20:54:42.068118226 +0000 UTC m=+6.139542510"
	Nov 20 20:54:42 embed-certs-954820 kubelet[1447]: I1120 20:54:42.081691    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2hlth" podStartSLOduration=1.081670062 podStartE2EDuration="1.081670062s" podCreationTimestamp="2025-11-20 20:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:42.081528883 +0000 UTC m=+6.152953170" watchObservedRunningTime="2025-11-20 20:54:42.081670062 +0000 UTC m=+6.153094344"
	Nov 20 20:54:52 embed-certs-954820 kubelet[1447]: I1120 20:54:52.464035    1447 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 20 20:54:52 embed-certs-954820 kubelet[1447]: I1120 20:54:52.505965    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c10e173a-b2ad-4834-be87-fe5f82ee3e43-config-volume\") pod \"coredns-66bc5c9577-x7zhn\" (UID: \"c10e173a-b2ad-4834-be87-fe5f82ee3e43\") " pod="kube-system/coredns-66bc5c9577-x7zhn"
	Nov 20 20:54:52 embed-certs-954820 kubelet[1447]: I1120 20:54:52.506005    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/96efca46-ed2d-49a7-a2b3-dcf64dcdc6ff-tmp\") pod \"storage-provisioner\" (UID: \"96efca46-ed2d-49a7-a2b3-dcf64dcdc6ff\") " pod="kube-system/storage-provisioner"
	Nov 20 20:54:52 embed-certs-954820 kubelet[1447]: I1120 20:54:52.506020    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h25v2\" (UniqueName: \"kubernetes.io/projected/96efca46-ed2d-49a7-a2b3-dcf64dcdc6ff-kube-api-access-h25v2\") pod \"storage-provisioner\" (UID: \"96efca46-ed2d-49a7-a2b3-dcf64dcdc6ff\") " pod="kube-system/storage-provisioner"
	Nov 20 20:54:52 embed-certs-954820 kubelet[1447]: I1120 20:54:52.506047    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgswv\" (UniqueName: \"kubernetes.io/projected/c10e173a-b2ad-4834-be87-fe5f82ee3e43-kube-api-access-jgswv\") pod \"coredns-66bc5c9577-x7zhn\" (UID: \"c10e173a-b2ad-4834-be87-fe5f82ee3e43\") " pod="kube-system/coredns-66bc5c9577-x7zhn"
	Nov 20 20:54:53 embed-certs-954820 kubelet[1447]: I1120 20:54:53.098328    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-x7zhn" podStartSLOduration=12.098307413 podStartE2EDuration="12.098307413s" podCreationTimestamp="2025-11-20 20:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:53.098064317 +0000 UTC m=+17.169488598" watchObservedRunningTime="2025-11-20 20:54:53.098307413 +0000 UTC m=+17.169731694"
	Nov 20 20:54:53 embed-certs-954820 kubelet[1447]: I1120 20:54:53.123672    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.123649616 podStartE2EDuration="12.123649616s" podCreationTimestamp="2025-11-20 20:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:53.123425154 +0000 UTC m=+17.194849435" watchObservedRunningTime="2025-11-20 20:54:53.123649616 +0000 UTC m=+17.195073896"
	Nov 20 20:54:55 embed-certs-954820 kubelet[1447]: I1120 20:54:55.220629    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcn4w\" (UniqueName: \"kubernetes.io/projected/c1920ad7-2d95-4409-be9d-031c42380cd6-kube-api-access-dcn4w\") pod \"busybox\" (UID: \"c1920ad7-2d95-4409-be9d-031c42380cd6\") " pod="default/busybox"
	Nov 20 20:54:58 embed-certs-954820 kubelet[1447]: I1120 20:54:58.113049    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.458328498 podStartE2EDuration="3.113032987s" podCreationTimestamp="2025-11-20 20:54:55 +0000 UTC" firstStartedPulling="2025-11-20 20:54:55.586543568 +0000 UTC m=+19.657967851" lastFinishedPulling="2025-11-20 20:54:57.24124808 +0000 UTC m=+21.312672340" observedRunningTime="2025-11-20 20:54:58.112606632 +0000 UTC m=+22.184030915" watchObservedRunningTime="2025-11-20 20:54:58.113032987 +0000 UTC m=+22.184457267"
	
	
	==> storage-provisioner [6e397d99719e867eabf07e16038fb6612f5d6d9476c297caa9e75e84dd8995f1] <==
	I1120 20:54:52.971091       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 20:54:52.979143       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 20:54:52.979195       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 20:54:52.981637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:54:52.986154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 20:54:52.986383       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 20:54:52.986546       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69e8b54b-ab56-43aa-b79e-b9127295cd5b", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-954820_a23d7f85-96cc-407a-929a-863a309b458f became leader
	I1120 20:54:52.986591       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-954820_a23d7f85-96cc-407a-929a-863a309b458f!
	W1120 20:54:52.989596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:54:52.994171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 20:54:53.087220       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-954820_a23d7f85-96cc-407a-929a-863a309b458f!
	W1120 20:54:54.997913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:54:55.002721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:54:57.005861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:54:57.010386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:54:59.014309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:54:59.018894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:01.021950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:01.027190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:03.030593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:03.034339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-954820 -n embed-certs-954820
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-954820 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
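For reference, the non-running-pod check at helpers_test.go:269 above can be reproduced by hand with the same command the harness logs; an empty result would indicate that no pod in the cluster is outside the Running phase (that interpretation of the field selector is added here, not asserted by the harness):

	kubectl --context embed-certs-954820 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running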
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-954820
helpers_test.go:243: (dbg) docker inspect embed-certs-954820:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2e456685b6f2524aa0eb2dbb0844c721c48386b2d98a7097d8e5af74f5f8b189",
	        "Created": "2025-11-20T20:54:17.402845892Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 263979,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T20:54:17.445172033Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/2e456685b6f2524aa0eb2dbb0844c721c48386b2d98a7097d8e5af74f5f8b189/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2e456685b6f2524aa0eb2dbb0844c721c48386b2d98a7097d8e5af74f5f8b189/hostname",
	        "HostsPath": "/var/lib/docker/containers/2e456685b6f2524aa0eb2dbb0844c721c48386b2d98a7097d8e5af74f5f8b189/hosts",
	        "LogPath": "/var/lib/docker/containers/2e456685b6f2524aa0eb2dbb0844c721c48386b2d98a7097d8e5af74f5f8b189/2e456685b6f2524aa0eb2dbb0844c721c48386b2d98a7097d8e5af74f5f8b189-json.log",
	        "Name": "/embed-certs-954820",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-954820:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-954820",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2e456685b6f2524aa0eb2dbb0844c721c48386b2d98a7097d8e5af74f5f8b189",
	                "LowerDir": "/var/lib/docker/overlay2/b9d6896e2a0d453970469be59c51b81392c7fe44e94394cf83a9e893efbcff14-init/diff:/var/lib/docker/overlay2/b8e13cfd95c92c89e06ea4ca61f150e2b9e9586529048197192d1a83648ef8cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b9d6896e2a0d453970469be59c51b81392c7fe44e94394cf83a9e893efbcff14/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b9d6896e2a0d453970469be59c51b81392c7fe44e94394cf83a9e893efbcff14/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b9d6896e2a0d453970469be59c51b81392c7fe44e94394cf83a9e893efbcff14/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-954820",
	                "Source": "/var/lib/docker/volumes/embed-certs-954820/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-954820",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-954820",
	                "name.minikube.sigs.k8s.io": "embed-certs-954820",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "58172182d3472f13e573b2487006c7e39072901a0456bf781944c7d957a6fcdf",
	            "SandboxKey": "/var/run/docker/netns/58172182d347",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-954820": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef2a95d8f5448f1f3bd6bffb0f08e5b56243969aa4a812553444ef9c9270c6b4",
	                    "EndpointID": "9439522a393e93d637703bb4d382a8fe106e99800a24486f90f9cd5ac58c0da7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "8e:b7:28:94:45:0b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-954820",
	                        "2e456685b6f2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
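As a reading aid for the inspect dump above: the host port bound to the container's 8443/tcp endpoint (33082 per the NetworkSettings block) can be pulled out directly with the same Go-template pattern minikube itself logs further down for 22/tcp, for example:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-954820

The template is taken from the cli_runner invocations later in this log; only the port name is changed.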
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-954820 -n embed-certs-954820
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-954820 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-954820 logs -n 25: (1.126196921s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ start   │ -p old-k8s-version-715005 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-715005       │ jenkins │ v1.37.0 │ 20 Nov 25 20:53 UTC │ 20 Nov 25 20:53 UTC │
	│ addons  │ enable metrics-server -p no-preload-480337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:53 UTC │ 20 Nov 25 20:53 UTC │
	│ stop    │ -p no-preload-480337 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:53 UTC │ 20 Nov 25 20:53 UTC │
	│ addons  │ enable dashboard -p no-preload-480337 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:53 UTC │ 20 Nov 25 20:53 UTC │
	│ start   │ -p no-preload-480337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:53 UTC │ 20 Nov 25 20:54 UTC │
	│ image   │ old-k8s-version-715005 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-715005       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ pause   │ -p old-k8s-version-715005 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-715005       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ unpause │ -p old-k8s-version-715005 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-715005       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ delete  │ -p old-k8s-version-715005                                                                                                                                                                                                                           │ old-k8s-version-715005       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ delete  │ -p old-k8s-version-715005                                                                                                                                                                                                                           │ old-k8s-version-715005       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ start   │ -p embed-certs-954820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-954820           │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ image   │ no-preload-480337 image list --format=json                                                                                                                                                                                                          │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ pause   │ -p no-preload-480337 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ start   │ -p cert-expiration-137718 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-137718       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ unpause │ -p no-preload-480337 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ delete  │ -p no-preload-480337                                                                                                                                                                                                                                │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ delete  │ -p no-preload-480337                                                                                                                                                                                                                                │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ delete  │ -p disable-driver-mounts-311936                                                                                                                                                                                                                     │ disable-driver-mounts-311936 │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ start   │ -p default-k8s-diff-port-053182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-053182 │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:55 UTC │
	│ delete  │ -p cert-expiration-137718                                                                                                                                                                                                                           │ cert-expiration-137718       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ start   │ -p newest-cni-439796 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ addons  │ enable metrics-server -p newest-cni-439796 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ stop    │ -p newest-cni-439796 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ addons  │ enable dashboard -p newest-cni-439796 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ start   │ -p newest-cni-439796 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:54:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:54:59.857828  278240 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:54:59.858105  278240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:54:59.858115  278240 out.go:374] Setting ErrFile to fd 2...
	I1120 20:54:59.858119  278240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:54:59.858349  278240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
	I1120 20:54:59.858826  278240 out.go:368] Setting JSON to false
	I1120 20:54:59.860194  278240 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2252,"bootTime":1763669848,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:54:59.860277  278240 start.go:143] virtualization: kvm guest
	I1120 20:54:59.862251  278240 out.go:179] * [newest-cni-439796] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:54:59.863664  278240 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:54:59.863667  278240 notify.go:221] Checking for updates...
	I1120 20:54:59.864889  278240 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:54:59.866102  278240 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3769/kubeconfig
	I1120 20:54:59.867392  278240 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3769/.minikube
	I1120 20:54:59.868550  278240 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:54:59.869682  278240 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:54:59.871457  278240 config.go:182] Loaded profile config "newest-cni-439796": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:54:59.871972  278240 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:54:59.895937  278240 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:54:59.896024  278240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:54:59.953310  278240 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-20 20:54:59.943244297 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:54:59.953450  278240 docker.go:319] overlay module found
	I1120 20:54:59.955196  278240 out.go:179] * Using the docker driver based on existing profile
	I1120 20:54:59.956312  278240 start.go:309] selected driver: docker
	I1120 20:54:59.956329  278240 start.go:930] validating driver "docker" against &{Name:newest-cni-439796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-439796 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:54:59.956444  278240 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:54:59.956970  278240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:55:00.019097  278240 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-20 20:55:00.008082303 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:55:00.019426  278240 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 20:55:00.019462  278240 cni.go:84] Creating CNI manager for ""
	I1120 20:55:00.019528  278240 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 20:55:00.019596  278240 start.go:353] cluster config:
	{Name:newest-cni-439796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-439796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:55:00.021170  278240 out.go:179] * Starting "newest-cni-439796" primary control-plane node in "newest-cni-439796" cluster
	I1120 20:55:00.022241  278240 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1120 20:55:00.023448  278240 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:55:00.024648  278240 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 20:55:00.024678  278240 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1120 20:55:00.024688  278240 cache.go:65] Caching tarball of preloaded images
	I1120 20:55:00.024751  278240 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:55:00.024781  278240 preload.go:238] Found /home/jenkins/minikube-integration/21923-3769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1120 20:55:00.024793  278240 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1120 20:55:00.024892  278240 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796/config.json ...
	I1120 20:55:00.047349  278240 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 20:55:00.047385  278240 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 20:55:00.047421  278240 cache.go:243] Successfully downloaded all kic artifacts
	I1120 20:55:00.047453  278240 start.go:360] acquireMachinesLock for newest-cni-439796: {Name:mkd377b5021ac8b488b2c648334cf58462a4dda8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:55:00.047519  278240 start.go:364] duration metric: took 41.671µs to acquireMachinesLock for "newest-cni-439796"
	I1120 20:55:00.047542  278240 start.go:96] Skipping create...Using existing machine configuration
	I1120 20:55:00.047552  278240 fix.go:54] fixHost starting: 
	I1120 20:55:00.047793  278240 cli_runner.go:164] Run: docker container inspect newest-cni-439796 --format={{.State.Status}}
	I1120 20:55:00.066752  278240 fix.go:112] recreateIfNeeded on newest-cni-439796: state=Stopped err=<nil>
	W1120 20:55:00.066782  278240 fix.go:138] unexpected machine state, will restart: <nil>
	W1120 20:54:59.168958  267938 node_ready.go:57] node "default-k8s-diff-port-053182" has "Ready":"False" status (will retry)
	W1120 20:55:01.169101  267938 node_ready.go:57] node "default-k8s-diff-port-053182" has "Ready":"False" status (will retry)
	I1120 20:55:01.669497  267938 node_ready.go:49] node "default-k8s-diff-port-053182" is "Ready"
	I1120 20:55:01.669530  267938 node_ready.go:38] duration metric: took 11.503696878s for node "default-k8s-diff-port-053182" to be "Ready" ...
	I1120 20:55:01.669547  267938 api_server.go:52] waiting for apiserver process to appear ...
	I1120 20:55:01.669608  267938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:55:01.684444  267938 api_server.go:72] duration metric: took 11.853641818s to wait for apiserver process to appear ...
	I1120 20:55:01.684479  267938 api_server.go:88] waiting for apiserver healthz status ...
	I1120 20:55:01.684517  267938 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1120 20:55:01.690782  267938 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1120 20:55:01.691893  267938 api_server.go:141] control plane version: v1.34.1
	I1120 20:55:01.691922  267938 api_server.go:131] duration metric: took 7.434681ms to wait for apiserver health ...
	I1120 20:55:01.691934  267938 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 20:55:01.695775  267938 system_pods.go:59] 8 kube-system pods found
	I1120 20:55:01.695832  267938 system_pods.go:61] "coredns-66bc5c9577-m5kfb" [7af76736-ef8a-434f-ad0c-b52641f9f02d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:55:01.695845  267938 system_pods.go:61] "etcd-default-k8s-diff-port-053182" [bd91f04b-5f3e-4a56-9854-44217a3e84c4] Running
	I1120 20:55:01.695858  267938 system_pods.go:61] "kindnet-sg6pg" [1f060cb7-fe2e-40da-b620-0ae4ab1b46ca] Running
	I1120 20:55:01.695873  267938 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-053182" [233f521c-596a-48b5-a075-6f7047f8681e] Running
	I1120 20:55:01.695882  267938 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-053182" [b8030abf-6545-401c-9be1-ff6d1e183855] Running
	I1120 20:55:01.695888  267938 system_pods.go:61] "kube-proxy-9dwtf" [f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3] Running
	I1120 20:55:01.695897  267938 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-053182" [e69e64b5-7879-46ff-9920-5090e462be17] Running
	I1120 20:55:01.695905  267938 system_pods.go:61] "storage-provisioner" [47956acc-9579-4eb7-9d9f-a6e82239fcd8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:55:01.695917  267938 system_pods.go:74] duration metric: took 3.975656ms to wait for pod list to return data ...
	I1120 20:55:01.695931  267938 default_sa.go:34] waiting for default service account to be created ...
	I1120 20:55:01.702110  267938 default_sa.go:45] found service account: "default"
	I1120 20:55:01.702135  267938 default_sa.go:55] duration metric: took 6.196385ms for default service account to be created ...
	I1120 20:55:01.702146  267938 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 20:55:01.796537  267938 system_pods.go:86] 8 kube-system pods found
	I1120 20:55:01.796576  267938 system_pods.go:89] "coredns-66bc5c9577-m5kfb" [7af76736-ef8a-434f-ad0c-b52641f9f02d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:55:01.796585  267938 system_pods.go:89] "etcd-default-k8s-diff-port-053182" [bd91f04b-5f3e-4a56-9854-44217a3e84c4] Running
	I1120 20:55:01.796599  267938 system_pods.go:89] "kindnet-sg6pg" [1f060cb7-fe2e-40da-b620-0ae4ab1b46ca] Running
	I1120 20:55:01.796605  267938 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-053182" [233f521c-596a-48b5-a075-6f7047f8681e] Running
	I1120 20:55:01.796610  267938 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-053182" [b8030abf-6545-401c-9be1-ff6d1e183855] Running
	I1120 20:55:01.796621  267938 system_pods.go:89] "kube-proxy-9dwtf" [f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3] Running
	I1120 20:55:01.796626  267938 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-053182" [e69e64b5-7879-46ff-9920-5090e462be17] Running
	I1120 20:55:01.796634  267938 system_pods.go:89] "storage-provisioner" [47956acc-9579-4eb7-9d9f-a6e82239fcd8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:55:01.796670  267938 retry.go:31] will retry after 230.554359ms: missing components: kube-dns
	I1120 20:55:02.032424  267938 system_pods.go:86] 8 kube-system pods found
	I1120 20:55:02.032457  267938 system_pods.go:89] "coredns-66bc5c9577-m5kfb" [7af76736-ef8a-434f-ad0c-b52641f9f02d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:55:02.032465  267938 system_pods.go:89] "etcd-default-k8s-diff-port-053182" [bd91f04b-5f3e-4a56-9854-44217a3e84c4] Running
	I1120 20:55:02.032474  267938 system_pods.go:89] "kindnet-sg6pg" [1f060cb7-fe2e-40da-b620-0ae4ab1b46ca] Running
	I1120 20:55:02.032479  267938 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-053182" [233f521c-596a-48b5-a075-6f7047f8681e] Running
	I1120 20:55:02.032484  267938 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-053182" [b8030abf-6545-401c-9be1-ff6d1e183855] Running
	I1120 20:55:02.032489  267938 system_pods.go:89] "kube-proxy-9dwtf" [f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3] Running
	I1120 20:55:02.032493  267938 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-053182" [e69e64b5-7879-46ff-9920-5090e462be17] Running
	I1120 20:55:02.032500  267938 system_pods.go:89] "storage-provisioner" [47956acc-9579-4eb7-9d9f-a6e82239fcd8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:55:02.032519  267938 retry.go:31] will retry after 327.025815ms: missing components: kube-dns
	I1120 20:55:02.365222  267938 system_pods.go:86] 8 kube-system pods found
	I1120 20:55:02.365305  267938 system_pods.go:89] "coredns-66bc5c9577-m5kfb" [7af76736-ef8a-434f-ad0c-b52641f9f02d] Running
	I1120 20:55:02.365316  267938 system_pods.go:89] "etcd-default-k8s-diff-port-053182" [bd91f04b-5f3e-4a56-9854-44217a3e84c4] Running
	I1120 20:55:02.365326  267938 system_pods.go:89] "kindnet-sg6pg" [1f060cb7-fe2e-40da-b620-0ae4ab1b46ca] Running
	I1120 20:55:02.365334  267938 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-053182" [233f521c-596a-48b5-a075-6f7047f8681e] Running
	I1120 20:55:02.365351  267938 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-053182" [b8030abf-6545-401c-9be1-ff6d1e183855] Running
	I1120 20:55:02.365357  267938 system_pods.go:89] "kube-proxy-9dwtf" [f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3] Running
	I1120 20:55:02.365363  267938 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-053182" [e69e64b5-7879-46ff-9920-5090e462be17] Running
	I1120 20:55:02.365394  267938 system_pods.go:89] "storage-provisioner" [47956acc-9579-4eb7-9d9f-a6e82239fcd8] Running
	I1120 20:55:02.365405  267938 system_pods.go:126] duration metric: took 663.251244ms to wait for k8s-apps to be running ...
	I1120 20:55:02.365435  267938 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:55:02.365836  267938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:55:02.379259  267938 system_svc.go:56] duration metric: took 13.837433ms WaitForService to wait for kubelet
	I1120 20:55:02.379293  267938 kubeadm.go:587] duration metric: took 12.548497918s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:55:02.379319  267938 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:55:02.382189  267938 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:55:02.382219  267938 node_conditions.go:123] node cpu capacity is 8
	I1120 20:55:02.382231  267938 node_conditions.go:105] duration metric: took 2.905948ms to run NodePressure ...
	I1120 20:55:02.382244  267938 start.go:242] waiting for startup goroutines ...
	I1120 20:55:02.382254  267938 start.go:247] waiting for cluster config update ...
	I1120 20:55:02.382269  267938 start.go:256] writing updated cluster config ...
	I1120 20:55:02.382592  267938 ssh_runner.go:195] Run: rm -f paused
	I1120 20:55:02.386235  267938 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:55:02.389651  267938 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m5kfb" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.393726  267938 pod_ready.go:94] pod "coredns-66bc5c9577-m5kfb" is "Ready"
	I1120 20:55:02.393745  267938 pod_ready.go:86] duration metric: took 4.074153ms for pod "coredns-66bc5c9577-m5kfb" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.395689  267938 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.399316  267938 pod_ready.go:94] pod "etcd-default-k8s-diff-port-053182" is "Ready"
	I1120 20:55:02.399335  267938 pod_ready.go:86] duration metric: took 3.628858ms for pod "etcd-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.401248  267938 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.404743  267938 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-053182" is "Ready"
	I1120 20:55:02.404759  267938 pod_ready.go:86] duration metric: took 3.496456ms for pod "kube-apiserver-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.406414  267938 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.790539  267938 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-053182" is "Ready"
	I1120 20:55:02.790573  267938 pod_ready.go:86] duration metric: took 384.138389ms for pod "kube-controller-manager-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.990773  267938 pod_ready.go:83] waiting for pod "kube-proxy-9dwtf" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:03.390942  267938 pod_ready.go:94] pod "kube-proxy-9dwtf" is "Ready"
	I1120 20:55:03.390966  267938 pod_ready.go:86] duration metric: took 400.162298ms for pod "kube-proxy-9dwtf" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:03.591644  267938 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:03.990591  267938 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-053182" is "Ready"
	I1120 20:55:03.990620  267938 pod_ready.go:86] duration metric: took 398.945663ms for pod "kube-scheduler-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:03.990634  267938 pod_ready.go:40] duration metric: took 1.604373018s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:55:04.040872  267938 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 20:55:04.046253  267938 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-053182" cluster and "default" namespace by default
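	The pod_ready checks logged above can be repeated by hand against the same profile. A minimal sketch, assuming the default-k8s-diff-port-053182 kubeconfig context still exists; the label selectors are the ones quoted in the pod_ready lines, and the remaining component=... labels can be checked the same way:
	    kubectl --context default-k8s-diff-port-053182 -n kube-system \
	      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
	    kubectl --context default-k8s-diff-port-053182 -n kube-system \
	      wait --for=condition=Ready pod -l component=kube-apiserver --timeout=4m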
	I1120 20:55:00.068525  278240 out.go:252] * Restarting existing docker container for "newest-cni-439796" ...
	I1120 20:55:00.068597  278240 cli_runner.go:164] Run: docker start newest-cni-439796
	I1120 20:55:00.341240  278240 cli_runner.go:164] Run: docker container inspect newest-cni-439796 --format={{.State.Status}}
	I1120 20:55:00.361218  278240 kic.go:430] container "newest-cni-439796" state is running.
	I1120 20:55:00.361592  278240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-439796
	I1120 20:55:00.380436  278240 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796/config.json ...
	I1120 20:55:00.380646  278240 machine.go:94] provisionDockerMachine start ...
	I1120 20:55:00.380703  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:00.399740  278240 main.go:143] libmachine: Using SSH client type: native
	I1120 20:55:00.399992  278240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1120 20:55:00.400005  278240 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:55:00.400638  278240 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44044->127.0.0.1:33094: read: connection reset by peer
	I1120 20:55:03.537357  278240 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-439796
	
	I1120 20:55:03.537416  278240 ubuntu.go:182] provisioning hostname "newest-cni-439796"
	I1120 20:55:03.537490  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:03.564681  278240 main.go:143] libmachine: Using SSH client type: native
	I1120 20:55:03.565007  278240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1120 20:55:03.565025  278240 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-439796 && echo "newest-cni-439796" | sudo tee /etc/hostname
	I1120 20:55:03.714348  278240 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-439796
	
	I1120 20:55:03.714449  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:03.733081  278240 main.go:143] libmachine: Using SSH client type: native
	I1120 20:55:03.733307  278240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1120 20:55:03.733326  278240 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-439796' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-439796/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-439796' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:55:03.870069  278240 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:55:03.870099  278240 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-3769/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-3769/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-3769/.minikube}
	I1120 20:55:03.870136  278240 ubuntu.go:190] setting up certificates
	I1120 20:55:03.870148  278240 provision.go:84] configureAuth start
	I1120 20:55:03.870204  278240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-439796
	I1120 20:55:03.888998  278240 provision.go:143] copyHostCerts
	I1120 20:55:03.889072  278240 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3769/.minikube/ca.pem, removing ...
	I1120 20:55:03.889086  278240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3769/.minikube/ca.pem
	I1120 20:55:03.889169  278240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-3769/.minikube/ca.pem (1082 bytes)
	I1120 20:55:03.889364  278240 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3769/.minikube/cert.pem, removing ...
	I1120 20:55:03.889391  278240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3769/.minikube/cert.pem
	I1120 20:55:03.889436  278240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-3769/.minikube/cert.pem (1123 bytes)
	I1120 20:55:03.889525  278240 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3769/.minikube/key.pem, removing ...
	I1120 20:55:03.889536  278240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3769/.minikube/key.pem
	I1120 20:55:03.889569  278240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-3769/.minikube/key.pem (1679 bytes)
	I1120 20:55:03.889647  278240 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-3769/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca-key.pem org=jenkins.newest-cni-439796 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-439796]
	I1120 20:55:04.066966  278240 provision.go:177] copyRemoteCerts
	I1120 20:55:04.067036  278240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:55:04.067080  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:04.090856  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:04.196925  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:55:04.217358  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 20:55:04.242617  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 20:55:04.262514  278240 provision.go:87] duration metric: took 392.354465ms to configureAuth
	I1120 20:55:04.262545  278240 ubuntu.go:206] setting minikube options for container-runtime
	I1120 20:55:04.262716  278240 config.go:182] Loaded profile config "newest-cni-439796": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:55:04.262727  278240 machine.go:97] duration metric: took 3.882068475s to provisionDockerMachine
	I1120 20:55:04.262735  278240 start.go:293] postStartSetup for "newest-cni-439796" (driver="docker")
	I1120 20:55:04.262744  278240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:55:04.262787  278240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:55:04.262830  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:04.283586  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:04.382700  278240 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:55:04.386689  278240 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 20:55:04.386720  278240 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 20:55:04.386734  278240 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3769/.minikube/addons for local assets ...
	I1120 20:55:04.386784  278240 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3769/.minikube/files for local assets ...
	I1120 20:55:04.386890  278240 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-3769/.minikube/files/etc/ssl/certs/77312.pem -> 77312.pem in /etc/ssl/certs
	I1120 20:55:04.387094  278240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 20:55:04.395171  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/files/etc/ssl/certs/77312.pem --> /etc/ssl/certs/77312.pem (1708 bytes)
	I1120 20:55:04.412782  278240 start.go:296] duration metric: took 150.034316ms for postStartSetup
	I1120 20:55:04.412864  278240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:55:04.412910  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:04.433695  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:04.530336  278240 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 20:55:04.535206  278240 fix.go:56] duration metric: took 4.48764827s for fixHost
	I1120 20:55:04.535232  278240 start.go:83] releasing machines lock for "newest-cni-439796", held for 4.487699701s
	I1120 20:55:04.535302  278240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-439796
	I1120 20:55:04.557073  278240 ssh_runner.go:195] Run: cat /version.json
	I1120 20:55:04.557151  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:04.557181  278240 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:55:04.557249  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:04.579766  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:04.580774  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:04.679945  278240 ssh_runner.go:195] Run: systemctl --version
	I1120 20:55:04.743090  278240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:55:04.748524  278240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:55:04.748593  278240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:55:04.757428  278240 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 20:55:04.757454  278240 start.go:496] detecting cgroup driver to use...
	I1120 20:55:04.757485  278240 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 20:55:04.757548  278240 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1120 20:55:04.776538  278240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1120 20:55:04.791147  278240 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:55:04.791216  278240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:55:04.809821  278240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:55:04.824474  278240 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
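	The runtime probing above (stopping crio, disabling cri-docker, detecting the cgroup driver) can be spot-checked from the host. A minimal sketch, assuming the newest-cni-439796 profile is still running and minikube ssh is available:
	    minikube -p newest-cni-439796 ssh -- sudo systemctl is-active crio
	    minikube -p newest-cni-439796 ssh -- sudo systemctl is-enabled cri-docker.socket
	    minikube -p newest-cni-439796 ssh -- stat -fc %T /sys/fs/cgroup   # prints "cgroup2fs" on cgroup v2 hosts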
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	d3441ee2499da       56cc512116c8f       8 seconds ago       Running             busybox                   0                   6281561cd9d13       busybox                                      default
	9aadc48bf3961       52546a367cc9e       13 seconds ago      Running             coredns                   0                   b2bf7b2d8cf4c       coredns-66bc5c9577-x7zhn                     kube-system
	6e397d99719e8       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   b52812d341c06       storage-provisioner                          kube-system
	b3b1249873abc       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   13d37228a371a       kindnet-2hlth                                kube-system
	63121e4af9723       fc25172553d79       24 seconds ago      Running             kube-proxy                0                   5e5b66e111e34       kube-proxy-72rnp                             kube-system
	5efb0a99caac2       c80c8dbafe7dd       36 seconds ago      Running             kube-controller-manager   0                   b0e7764dfffcd       kube-controller-manager-embed-certs-954820   kube-system
	683dc01ab1049       7dd6aaa1717ab       36 seconds ago      Running             kube-scheduler            0                   98c6eb0af96be       kube-scheduler-embed-certs-954820            kube-system
	ecb7ea8c22d19       5f1f5298c888d       36 seconds ago      Running             etcd                      0                   1d2184c6b581d       etcd-embed-certs-954820                      kube-system
	701bc97da45da       c3994bc696102       36 seconds ago      Running             kube-apiserver            0                   5749b9fa3c2f8       kube-apiserver-embed-certs-954820            kube-system
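	The table above looks like crictl output gathered inside the node. A minimal sketch for regenerating it, assuming the embed-certs-954820 profile is still up and that crictl is present in the node image (it normally is for containerd runtimes):
	    minikube -p embed-certs-954820 ssh -- sudo crictl ps -a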
	
	
	==> containerd <==
	Nov 20 20:54:52 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:52.909722408Z" level=info msg="CreateContainer within sandbox \"b52812d341c067a9f2a2c0c902b0f0560c86f174f65b3d840cb58a0337581c65\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"6e397d99719e867eabf07e16038fb6612f5d6d9476c297caa9e75e84dd8995f1\""
	Nov 20 20:54:52 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:52.910799455Z" level=info msg="StartContainer for \"6e397d99719e867eabf07e16038fb6612f5d6d9476c297caa9e75e84dd8995f1\""
	Nov 20 20:54:52 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:52.911690129Z" level=info msg="connecting to shim 6e397d99719e867eabf07e16038fb6612f5d6d9476c297caa9e75e84dd8995f1" address="unix:///run/containerd/s/0dd8ef32bd568491c3581c978a6c6c7442fd8f675f0f8f9c937557d5224671dc" protocol=ttrpc version=3
	Nov 20 20:54:52 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:52.913398654Z" level=info msg="Container 9aadc48bf39619a170cd3c9e979ba2f4fcb645da619b9c1507dad3e583fcd784: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 20:54:52 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:52.919488296Z" level=info msg="CreateContainer within sandbox \"b2bf7b2d8cf4ccd0c89e1735e79d5dfb3f28bb95bba6d34ba28a4acefe760c6b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9aadc48bf39619a170cd3c9e979ba2f4fcb645da619b9c1507dad3e583fcd784\""
	Nov 20 20:54:52 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:52.920157624Z" level=info msg="StartContainer for \"9aadc48bf39619a170cd3c9e979ba2f4fcb645da619b9c1507dad3e583fcd784\""
	Nov 20 20:54:52 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:52.921197058Z" level=info msg="connecting to shim 9aadc48bf39619a170cd3c9e979ba2f4fcb645da619b9c1507dad3e583fcd784" address="unix:///run/containerd/s/c0ea3e1f21d52782fe126467faec3b36dfefcec2df93781e2633c92f90bdf11a" protocol=ttrpc version=3
	Nov 20 20:54:52 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:52.963720184Z" level=info msg="StartContainer for \"6e397d99719e867eabf07e16038fb6612f5d6d9476c297caa9e75e84dd8995f1\" returns successfully"
	Nov 20 20:54:52 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:52.971510367Z" level=info msg="StartContainer for \"9aadc48bf39619a170cd3c9e979ba2f4fcb645da619b9c1507dad3e583fcd784\" returns successfully"
	Nov 20 20:54:55 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:55.477244729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:c1920ad7-2d95-4409-be9d-031c42380cd6,Namespace:default,Attempt:0,}"
	Nov 20 20:54:55 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:55.516728969Z" level=info msg="connecting to shim 6281561cd9d13f57664cd402e6c498fd9f9fa1dea9e8ecce53c47212067df165" address="unix:///run/containerd/s/1a618de0669581369747a58490a394aaf623b9fd1470a4ce67e6405fae4199a6" namespace=k8s.io protocol=ttrpc version=3
	Nov 20 20:54:55 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:55.584777884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:c1920ad7-2d95-4409-be9d-031c42380cd6,Namespace:default,Attempt:0,} returns sandbox id \"6281561cd9d13f57664cd402e6c498fd9f9fa1dea9e8ecce53c47212067df165\""
	Nov 20 20:54:55 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:55.587109491Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.235816622Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.236670481Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396643"
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.237898816Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.239765592Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.240194452Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 1.653048294s"
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.240233567Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.246970289Z" level=info msg="CreateContainer within sandbox \"6281561cd9d13f57664cd402e6c498fd9f9fa1dea9e8ecce53c47212067df165\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.253552627Z" level=info msg="Container d3441ee2499da6faa7dde6934067483d46e71e6dc5a9056cda37187cf77cdedd: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.262961093Z" level=info msg="CreateContainer within sandbox \"6281561cd9d13f57664cd402e6c498fd9f9fa1dea9e8ecce53c47212067df165\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"d3441ee2499da6faa7dde6934067483d46e71e6dc5a9056cda37187cf77cdedd\""
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.263688911Z" level=info msg="StartContainer for \"d3441ee2499da6faa7dde6934067483d46e71e6dc5a9056cda37187cf77cdedd\""
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.264725431Z" level=info msg="connecting to shim d3441ee2499da6faa7dde6934067483d46e71e6dc5a9056cda37187cf77cdedd" address="unix:///run/containerd/s/1a618de0669581369747a58490a394aaf623b9fd1470a4ce67e6405fae4199a6" protocol=ttrpc version=3
	Nov 20 20:54:57 embed-certs-954820 containerd[664]: time="2025-11-20T20:54:57.314647182Z" level=info msg="StartContainer for \"d3441ee2499da6faa7dde6934067483d46e71e6dc5a9056cda37187cf77cdedd\" returns successfully"
	
	
	==> coredns [9aadc48bf39619a170cd3c9e979ba2f4fcb645da619b9c1507dad3e583fcd784] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44570 - 62339 "HINFO IN 5223488822803614498.4173452436444296566. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051798526s
	
	
	==> describe nodes <==
	Name:               embed-certs-954820
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-954820
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=embed-certs-954820
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_54_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:54:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-954820
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:54:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:54:52 +0000   Thu, 20 Nov 2025 20:54:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:54:52 +0000   Thu, 20 Nov 2025 20:54:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:54:52 +0000   Thu, 20 Nov 2025 20:54:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:54:52 +0000   Thu, 20 Nov 2025 20:54:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-954820
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                0ccc10b4-9e1e-496b-be58-89da7f82552b
	  Boot ID:                    7bcace10-faf8-4276-88b3-44b8d57bd915
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-x7zhn                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-954820                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-2hlth                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-embed-certs-954820             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-embed-certs-954820    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-72rnp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-embed-certs-954820             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node embed-certs-954820 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node embed-certs-954820 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x7 over 37s)  kubelet          Node embed-certs-954820 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  37s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  30s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node embed-certs-954820 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node embed-certs-954820 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node embed-certs-954820 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node embed-certs-954820 event: Registered Node embed-certs-954820 in Controller
	  Normal  NodeReady                14s                kubelet          Node embed-certs-954820 status is now: NodeReady
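	The node description above can be refreshed directly; a minimal sketch, assuming the embed-certs-954820 kubeconfig context exists (the node name is the one shown in the section):
	    kubectl --context embed-certs-954820 describe node embed-certs-954820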
	
	
	==> dmesg <==
	[Nov20 20:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001791] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.083011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400115] i8042: Warning: Keylock active
	[  +0.013837] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499559] block sda: the capability attribute has been deprecated.
	[  +0.087912] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024934] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.433429] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [ecb7ea8c22d19c62a975927d40533449c3000063ebed8bf1f3946e15a961f8f5] <==
	{"level":"warn","ts":"2025-11-20T20:54:33.655999Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-20T20:54:33.265402Z","time spent":"390.55134ms","remote":"127.0.0.1:57210","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":692,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/flowschemas/kube-system-service-accounts\" mod_revision:0 > success:<request_put:<key:\"/registry/flowschemas/kube-system-service-accounts\" value_size:634 >> failure:<>"}
	{"level":"info","ts":"2025-11-20T20:54:33.656065Z","caller":"traceutil/trace.go:172","msg":"trace[421027544] transaction","detail":"{read_only:false; response_revision:62; number_of_response:1; }","duration":"389.473516ms","start":"2025-11-20T20:54:33.266580Z","end":"2025-11-20T20:54:33.656054Z","steps":["trace[421027544] 'process raft request'  (duration: 389.23367ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:54:33.656104Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-20T20:54:33.266831Z","time spent":"389.183098ms","remote":"127.0.0.1:57110","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":464,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/priorityclasses/system-cluster-critical\" mod_revision:0 > success:<request_put:<key:\"/registry/priorityclasses/system-cluster-critical\" value_size:407 >> failure:<>"}
	{"level":"info","ts":"2025-11-20T20:54:33.656134Z","caller":"traceutil/trace.go:172","msg":"trace[393182147] transaction","detail":"{read_only:false; response_revision:63; number_of_response:1; }","duration":"389.354543ms","start":"2025-11-20T20:54:33.266771Z","end":"2025-11-20T20:54:33.656126Z","steps":["trace[393182147] 'process raft request'  (duration: 389.106273ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:54:33.656303Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-20T20:54:33.266565Z","time spent":"389.521731ms","remote":"127.0.0.1:57210","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1101,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/flowschemas/kube-controller-manager\" mod_revision:55 > success:<request_put:<key:\"/registry/flowschemas/kube-controller-manager\" value_size:1048 >> failure:<request_range:<key:\"/registry/flowschemas/kube-controller-manager\" > >"}
	{"level":"warn","ts":"2025-11-20T20:54:33.656441Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-20T20:54:33.266732Z","time spent":"389.416788ms","remote":"127.0.0.1:56538","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":712,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/events/default/embed-certs-954820.1879d2671b5869cd\" mod_revision:49 > success:<request_put:<key:\"/registry/events/default/embed-certs-954820.1879d2671b5869cd\" value_size:634 lease:499225158781226421 >> failure:<request_range:<key:\"/registry/events/default/embed-certs-954820.1879d2671b5869cd\" > >"}
	{"level":"info","ts":"2025-11-20T20:54:33.872888Z","caller":"traceutil/trace.go:172","msg":"trace[1568779350] linearizableReadLoop","detail":"{readStateIndex:70; appliedIndex:70; }","duration":"140.726728ms","start":"2025-11-20T20:54:33.732139Z","end":"2025-11-20T20:54:33.872865Z","steps":["trace[1568779350] 'read index received'  (duration: 140.710723ms)","trace[1568779350] 'applied index is now lower than readState.Index'  (duration: 6.178µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T20:54:33.983721Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"251.560052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/edit\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-20T20:54:33.983804Z","caller":"traceutil/trace.go:172","msg":"trace[879466238] range","detail":"{range_begin:/registry/clusterroles/edit; range_end:; response_count:0; response_revision:65; }","duration":"251.645213ms","start":"2025-11-20T20:54:33.732131Z","end":"2025-11-20T20:54:33.983776Z","steps":["trace[879466238] 'agreement among raft nodes before linearized reading'  (duration: 140.809251ms)","trace[879466238] 'range keys from in-memory index tree'  (duration: 110.702028ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T20:54:33.983840Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.904659ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597195636002270 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/flowschemas/service-accounts\" mod_revision:0 > success:<request_put:<key:\"/registry/flowschemas/service-accounts\" value_size:615 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-20T20:54:33.983990Z","caller":"traceutil/trace.go:172","msg":"trace[1173711927] transaction","detail":"{read_only:false; response_revision:67; number_of_response:1; }","duration":"322.851107ms","start":"2025-11-20T20:54:33.661126Z","end":"2025-11-20T20:54:33.983978Z","steps":["trace[1173711927] 'process raft request'  (duration: 322.78296ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:54:33.984011Z","caller":"traceutil/trace.go:172","msg":"trace[1971353058] transaction","detail":"{read_only:false; response_revision:66; number_of_response:1; }","duration":"323.496231ms","start":"2025-11-20T20:54:33.660491Z","end":"2025-11-20T20:54:33.983987Z","steps":["trace[1971353058] 'process raft request'  (duration: 212.394278ms)","trace[1971353058] 'compare'  (duration: 110.798457ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T20:54:33.984094Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-20T20:54:33.660476Z","time spent":"323.576413ms","remote":"127.0.0.1:57210","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":661,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/flowschemas/service-accounts\" mod_revision:0 > success:<request_put:<key:\"/registry/flowschemas/service-accounts\" value_size:615 >> failure:<>"}
	{"level":"warn","ts":"2025-11-20T20:54:33.984125Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-20T20:54:33.661112Z","time spent":"322.934334ms","remote":"127.0.0.1:56538","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":708,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/events/default/embed-certs-954820.1879d2671b5896e5\" mod_revision:51 > success:<request_put:<key:\"/registry/events/default/embed-certs-954820.1879d2671b5896e5\" value_size:630 lease:499225158781226421 >> failure:<request_range:<key:\"/registry/events/default/embed-certs-954820.1879d2671b5896e5\" > >"}
	{"level":"info","ts":"2025-11-20T20:54:33.984721Z","caller":"traceutil/trace.go:172","msg":"trace[1382470456] transaction","detail":"{read_only:false; response_revision:68; number_of_response:1; }","duration":"251.064082ms","start":"2025-11-20T20:54:33.733648Z","end":"2025-11-20T20:54:33.984712Z","steps":["trace[1382470456] 'process raft request'  (duration: 250.989694ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:54:34.091705Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.660425ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/view\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-20T20:54:34.091771Z","caller":"traceutil/trace.go:172","msg":"trace[712806050] range","detail":"{range_begin:/registry/clusterroles/view; range_end:; response_count:0; response_revision:69; }","duration":"103.745711ms","start":"2025-11-20T20:54:33.988012Z","end":"2025-11-20T20:54:34.091757Z","steps":["trace[712806050] 'agreement among raft nodes before linearized reading'  (duration: 99.649158ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:54:34.091897Z","caller":"traceutil/trace.go:172","msg":"trace[1320144489] transaction","detail":"{read_only:false; response_revision:70; number_of_response:1; }","duration":"103.4332ms","start":"2025-11-20T20:54:33.988450Z","end":"2025-11-20T20:54:34.091883Z","steps":["trace[1320144489] 'process raft request'  (duration: 99.278006ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:54:34.220339Z","caller":"traceutil/trace.go:172","msg":"trace[872629498] linearizableReadLoop","detail":"{readStateIndex:77; appliedIndex:77; }","duration":"124.786666ms","start":"2025-11-20T20:54:34.095519Z","end":"2025-11-20T20:54:34.220306Z","steps":["trace[872629498] 'read index received'  (duration: 124.779173ms)","trace[872629498] 'applied index is now lower than readState.Index'  (duration: 6.506µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T20:54:34.311024Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"215.483473ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/cluster-admin\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-20T20:54:34.311081Z","caller":"traceutil/trace.go:172","msg":"trace[1168485939] range","detail":"{range_begin:/registry/clusterroles/cluster-admin; range_end:; response_count:0; response_revision:72; }","duration":"215.552305ms","start":"2025-11-20T20:54:34.095515Z","end":"2025-11-20T20:54:34.311067Z","steps":["trace[1168485939] 'agreement among raft nodes before linearized reading'  (duration: 124.879214ms)","trace[1168485939] 'range keys from in-memory index tree'  (duration: 90.57366ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T20:54:34.311236Z","caller":"traceutil/trace.go:172","msg":"trace[1296496767] transaction","detail":"{read_only:false; response_revision:73; number_of_response:1; }","duration":"216.147442ms","start":"2025-11-20T20:54:34.095066Z","end":"2025-11-20T20:54:34.311213Z","steps":["trace[1296496767] 'process raft request'  (duration: 125.290959ms)","trace[1296496767] 'compare'  (duration: 90.641689ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T20:54:34.311276Z","caller":"traceutil/trace.go:172","msg":"trace[1931010025] transaction","detail":"{read_only:false; response_revision:74; number_of_response:1; }","duration":"215.098536ms","start":"2025-11-20T20:54:34.096165Z","end":"2025-11-20T20:54:34.311263Z","steps":["trace[1931010025] 'process raft request'  (duration: 215.025132ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:54:34.311465Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"215.241207ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/embed-certs-954820.1879d2671b5869cd\" limit:1 ","response":"range_response_count:1 size:724"}
	{"level":"info","ts":"2025-11-20T20:54:34.311807Z","caller":"traceutil/trace.go:172","msg":"trace[862884614] range","detail":"{range_begin:/registry/events/default/embed-certs-954820.1879d2671b5869cd; range_end:; response_count:1; response_revision:74; }","duration":"215.574884ms","start":"2025-11-20T20:54:34.096206Z","end":"2025-11-20T20:54:34.311781Z","steps":["trace[862884614] 'agreement among raft nodes before linearized reading'  (duration: 215.134379ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:55:06 up 37 min,  0 user,  load average: 2.92, 2.85, 2.01
	Linux embed-certs-954820 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b3b1249873abc14f954f4d62e1593f12aec504feca9af1318cca0a9faa273bea] <==
	I1120 20:54:42.250570       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 20:54:42.250844       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 20:54:42.250983       1 main.go:148] setting mtu 1500 for CNI 
	I1120 20:54:42.251000       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 20:54:42.251013       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T20:54:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 20:54:42.450873       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 20:54:42.450934       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 20:54:42.450953       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 20:54:42.550820       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 20:54:42.850768       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 20:54:42.850837       1 metrics.go:72] Registering metrics
	I1120 20:54:42.850925       1 controller.go:711] "Syncing nftables rules"
	I1120 20:54:52.451636       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 20:54:52.451720       1 main.go:301] handling current node
	I1120 20:55:02.452952       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 20:55:02.453003       1 main.go:301] handling current node
	
	
	==> kube-apiserver [701bc97da45dafe69924b7d0298663b307e0de8bce555758070a8aaab74b7b28] <==
	I1120 20:54:32.169681       1 cache.go:39] Caches are synced for autoregister controller
	I1120 20:54:32.169954       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 20:54:32.300573       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1120 20:54:32.300927       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:54:32.302502       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 20:54:32.396542       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:54:32.398747       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 20:54:33.263847       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 20:54:33.657319       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 20:54:33.657347       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 20:54:34.877920       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 20:54:34.923111       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 20:54:35.067806       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 20:54:35.078987       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1120 20:54:35.080439       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 20:54:35.086563       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 20:54:35.114475       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 20:54:36.169550       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 20:54:36.178406       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 20:54:36.186005       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 20:54:40.417944       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:54:40.424653       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:54:40.867543       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 20:54:41.113895       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1120 20:55:03.279617       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:54524: use of closed network connection
	
	
	==> kube-controller-manager [5efb0a99caac24797372cd4ce9ed52e65067ca32732e38425559b88fefc42127] <==
	I1120 20:54:40.110484       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1120 20:54:40.110686       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 20:54:40.110694       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 20:54:40.110724       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 20:54:40.110825       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1120 20:54:40.111095       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 20:54:40.111112       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 20:54:40.111247       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 20:54:40.111316       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 20:54:40.112048       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 20:54:40.115326       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 20:54:40.115447       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1120 20:54:40.115490       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1120 20:54:40.115528       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1120 20:54:40.115535       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1120 20:54:40.115542       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1120 20:54:40.119893       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1120 20:54:40.119929       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:54:40.121161       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:54:40.122964       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-954820" podCIDRs=["10.244.0.0/24"]
	I1120 20:54:40.128032       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 20:54:40.136430       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 20:54:40.136721       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:54:40.159953       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 20:54:55.063154       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [63121e4af97231bbe9817cb5d8f21daa3e6e2de9fd7bc9742aa8901ad2361c5d] <==
	I1120 20:54:41.870745       1 server_linux.go:53] "Using iptables proxy"
	I1120 20:54:41.940522       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 20:54:42.040822       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 20:54:42.040862       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1120 20:54:42.040978       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 20:54:42.066003       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 20:54:42.066062       1 server_linux.go:132] "Using iptables Proxier"
	I1120 20:54:42.073247       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 20:54:42.073657       1 server.go:527] "Version info" version="v1.34.1"
	I1120 20:54:42.073760       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:54:42.075578       1 config.go:309] "Starting node config controller"
	I1120 20:54:42.075669       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 20:54:42.075691       1 config.go:106] "Starting endpoint slice config controller"
	I1120 20:54:42.075725       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 20:54:42.075748       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 20:54:42.075753       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 20:54:42.075699       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 20:54:42.075679       1 config.go:200] "Starting service config controller"
	I1120 20:54:42.075994       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 20:54:42.176571       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 20:54:42.176600       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 20:54:42.176588       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
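	Each per-component section in this report (kindnet, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) is that container's log stream. A minimal sketch for pulling a fresh copy of the kube-proxy one, assuming the pod name kube-proxy-72rnp from the pod listing above is still current:
	    kubectl --context embed-certs-954820 -n kube-system logs kube-proxy-72rnp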
	
	
	==> kube-scheduler [683dc01ab10495c1b23b1e9c040e2c5ee29653a0c6b195c45f5b4e9618ef8227] <==
	E1120 20:54:32.626330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:54:32.626527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:54:32.626565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 20:54:32.626644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:54:32.626717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 20:54:33.453735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:54:33.514350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:54:33.568045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 20:54:33.719457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:54:33.778994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 20:54:33.859626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 20:54:33.877065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1120 20:54:33.885469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 20:54:33.905995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 20:54:33.911253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:54:33.930845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:54:34.074623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:54:34.128314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 20:54:34.145778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 20:54:34.180247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 20:54:34.185646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 20:54:34.196008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 20:54:34.209391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 20:54:34.209511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1120 20:54:36.022656       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 20:54:37 embed-certs-954820 kubelet[1447]: I1120 20:54:37.081964    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-954820" podStartSLOduration=2.081939096 podStartE2EDuration="2.081939096s" podCreationTimestamp="2025-11-20 20:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:37.071390641 +0000 UTC m=+1.142814919" watchObservedRunningTime="2025-11-20 20:54:37.081939096 +0000 UTC m=+1.153363368"
	Nov 20 20:54:37 embed-certs-954820 kubelet[1447]: I1120 20:54:37.095236    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-954820" podStartSLOduration=1.095214286 podStartE2EDuration="1.095214286s" podCreationTimestamp="2025-11-20 20:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:37.082168744 +0000 UTC m=+1.153593025" watchObservedRunningTime="2025-11-20 20:54:37.095214286 +0000 UTC m=+1.166638567"
	Nov 20 20:54:37 embed-certs-954820 kubelet[1447]: I1120 20:54:37.095430    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-954820" podStartSLOduration=1.095419442 podStartE2EDuration="1.095419442s" podCreationTimestamp="2025-11-20 20:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:37.094754286 +0000 UTC m=+1.166178569" watchObservedRunningTime="2025-11-20 20:54:37.095419442 +0000 UTC m=+1.166843722"
	Nov 20 20:54:37 embed-certs-954820 kubelet[1447]: I1120 20:54:37.110305    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-954820" podStartSLOduration=1.110258381 podStartE2EDuration="1.110258381s" podCreationTimestamp="2025-11-20 20:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:37.110028883 +0000 UTC m=+1.181453164" watchObservedRunningTime="2025-11-20 20:54:37.110258381 +0000 UTC m=+1.181682662"
	Nov 20 20:54:40 embed-certs-954820 kubelet[1447]: I1120 20:54:40.169620    1447 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 20:54:40 embed-certs-954820 kubelet[1447]: I1120 20:54:40.170257    1447 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 20:54:41 embed-certs-954820 kubelet[1447]: I1120 20:54:41.232250    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q6ff\" (UniqueName: \"kubernetes.io/projected/cf0a1b73-7d4e-45dd-b4b2-21c1af727959-kube-api-access-6q6ff\") pod \"kube-proxy-72rnp\" (UID: \"cf0a1b73-7d4e-45dd-b4b2-21c1af727959\") " pod="kube-system/kube-proxy-72rnp"
	Nov 20 20:54:41 embed-certs-954820 kubelet[1447]: I1120 20:54:41.232301    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4de21d7-92f8-41cb-b1b5-d6666a519ec0-xtables-lock\") pod \"kindnet-2hlth\" (UID: \"c4de21d7-92f8-41cb-b1b5-d6666a519ec0\") " pod="kube-system/kindnet-2hlth"
	Nov 20 20:54:41 embed-certs-954820 kubelet[1447]: I1120 20:54:41.232333    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4de21d7-92f8-41cb-b1b5-d6666a519ec0-lib-modules\") pod \"kindnet-2hlth\" (UID: \"c4de21d7-92f8-41cb-b1b5-d6666a519ec0\") " pod="kube-system/kindnet-2hlth"
	Nov 20 20:54:41 embed-certs-954820 kubelet[1447]: I1120 20:54:41.232467    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cf0a1b73-7d4e-45dd-b4b2-21c1af727959-kube-proxy\") pod \"kube-proxy-72rnp\" (UID: \"cf0a1b73-7d4e-45dd-b4b2-21c1af727959\") " pod="kube-system/kube-proxy-72rnp"
	Nov 20 20:54:41 embed-certs-954820 kubelet[1447]: I1120 20:54:41.232522    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf0a1b73-7d4e-45dd-b4b2-21c1af727959-xtables-lock\") pod \"kube-proxy-72rnp\" (UID: \"cf0a1b73-7d4e-45dd-b4b2-21c1af727959\") " pod="kube-system/kube-proxy-72rnp"
	Nov 20 20:54:41 embed-certs-954820 kubelet[1447]: I1120 20:54:41.232548    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ctng\" (UniqueName: \"kubernetes.io/projected/c4de21d7-92f8-41cb-b1b5-d6666a519ec0-kube-api-access-9ctng\") pod \"kindnet-2hlth\" (UID: \"c4de21d7-92f8-41cb-b1b5-d6666a519ec0\") " pod="kube-system/kindnet-2hlth"
	Nov 20 20:54:41 embed-certs-954820 kubelet[1447]: I1120 20:54:41.232572    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf0a1b73-7d4e-45dd-b4b2-21c1af727959-lib-modules\") pod \"kube-proxy-72rnp\" (UID: \"cf0a1b73-7d4e-45dd-b4b2-21c1af727959\") " pod="kube-system/kube-proxy-72rnp"
	Nov 20 20:54:41 embed-certs-954820 kubelet[1447]: I1120 20:54:41.232594    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c4de21d7-92f8-41cb-b1b5-d6666a519ec0-cni-cfg\") pod \"kindnet-2hlth\" (UID: \"c4de21d7-92f8-41cb-b1b5-d6666a519ec0\") " pod="kube-system/kindnet-2hlth"
	Nov 20 20:54:42 embed-certs-954820 kubelet[1447]: I1120 20:54:42.068136    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-72rnp" podStartSLOduration=1.068118226 podStartE2EDuration="1.068118226s" podCreationTimestamp="2025-11-20 20:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:42.067919474 +0000 UTC m=+6.139343756" watchObservedRunningTime="2025-11-20 20:54:42.068118226 +0000 UTC m=+6.139542510"
	Nov 20 20:54:42 embed-certs-954820 kubelet[1447]: I1120 20:54:42.081691    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2hlth" podStartSLOduration=1.081670062 podStartE2EDuration="1.081670062s" podCreationTimestamp="2025-11-20 20:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:42.081528883 +0000 UTC m=+6.152953170" watchObservedRunningTime="2025-11-20 20:54:42.081670062 +0000 UTC m=+6.153094344"
	Nov 20 20:54:52 embed-certs-954820 kubelet[1447]: I1120 20:54:52.464035    1447 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 20 20:54:52 embed-certs-954820 kubelet[1447]: I1120 20:54:52.505965    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c10e173a-b2ad-4834-be87-fe5f82ee3e43-config-volume\") pod \"coredns-66bc5c9577-x7zhn\" (UID: \"c10e173a-b2ad-4834-be87-fe5f82ee3e43\") " pod="kube-system/coredns-66bc5c9577-x7zhn"
	Nov 20 20:54:52 embed-certs-954820 kubelet[1447]: I1120 20:54:52.506005    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/96efca46-ed2d-49a7-a2b3-dcf64dcdc6ff-tmp\") pod \"storage-provisioner\" (UID: \"96efca46-ed2d-49a7-a2b3-dcf64dcdc6ff\") " pod="kube-system/storage-provisioner"
	Nov 20 20:54:52 embed-certs-954820 kubelet[1447]: I1120 20:54:52.506020    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h25v2\" (UniqueName: \"kubernetes.io/projected/96efca46-ed2d-49a7-a2b3-dcf64dcdc6ff-kube-api-access-h25v2\") pod \"storage-provisioner\" (UID: \"96efca46-ed2d-49a7-a2b3-dcf64dcdc6ff\") " pod="kube-system/storage-provisioner"
	Nov 20 20:54:52 embed-certs-954820 kubelet[1447]: I1120 20:54:52.506047    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgswv\" (UniqueName: \"kubernetes.io/projected/c10e173a-b2ad-4834-be87-fe5f82ee3e43-kube-api-access-jgswv\") pod \"coredns-66bc5c9577-x7zhn\" (UID: \"c10e173a-b2ad-4834-be87-fe5f82ee3e43\") " pod="kube-system/coredns-66bc5c9577-x7zhn"
	Nov 20 20:54:53 embed-certs-954820 kubelet[1447]: I1120 20:54:53.098328    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-x7zhn" podStartSLOduration=12.098307413 podStartE2EDuration="12.098307413s" podCreationTimestamp="2025-11-20 20:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:53.098064317 +0000 UTC m=+17.169488598" watchObservedRunningTime="2025-11-20 20:54:53.098307413 +0000 UTC m=+17.169731694"
	Nov 20 20:54:53 embed-certs-954820 kubelet[1447]: I1120 20:54:53.123672    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.123649616 podStartE2EDuration="12.123649616s" podCreationTimestamp="2025-11-20 20:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:53.123425154 +0000 UTC m=+17.194849435" watchObservedRunningTime="2025-11-20 20:54:53.123649616 +0000 UTC m=+17.195073896"
	Nov 20 20:54:55 embed-certs-954820 kubelet[1447]: I1120 20:54:55.220629    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcn4w\" (UniqueName: \"kubernetes.io/projected/c1920ad7-2d95-4409-be9d-031c42380cd6-kube-api-access-dcn4w\") pod \"busybox\" (UID: \"c1920ad7-2d95-4409-be9d-031c42380cd6\") " pod="default/busybox"
	Nov 20 20:54:58 embed-certs-954820 kubelet[1447]: I1120 20:54:58.113049    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.458328498 podStartE2EDuration="3.113032987s" podCreationTimestamp="2025-11-20 20:54:55 +0000 UTC" firstStartedPulling="2025-11-20 20:54:55.586543568 +0000 UTC m=+19.657967851" lastFinishedPulling="2025-11-20 20:54:57.24124808 +0000 UTC m=+21.312672340" observedRunningTime="2025-11-20 20:54:58.112606632 +0000 UTC m=+22.184030915" watchObservedRunningTime="2025-11-20 20:54:58.113032987 +0000 UTC m=+22.184457267"
	
	
	==> storage-provisioner [6e397d99719e867eabf07e16038fb6612f5d6d9476c297caa9e75e84dd8995f1] <==
	I1120 20:54:52.971091       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 20:54:52.979143       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 20:54:52.979195       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 20:54:52.981637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:54:52.986154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 20:54:52.986383       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 20:54:52.986546       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69e8b54b-ab56-43aa-b79e-b9127295cd5b", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-954820_a23d7f85-96cc-407a-929a-863a309b458f became leader
	I1120 20:54:52.986591       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-954820_a23d7f85-96cc-407a-929a-863a309b458f!
	W1120 20:54:52.989596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:54:52.994171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 20:54:53.087220       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-954820_a23d7f85-96cc-407a-929a-863a309b458f!
	W1120 20:54:54.997913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:54:55.002721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:54:57.005861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:54:57.010386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:54:59.014309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:54:59.018894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:01.021950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:01.027190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:03.030593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:03.034339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:05.037539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:05.041560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-954820 -n embed-certs-954820
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-954820 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (12.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-053182 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d7fdd532-26fc-4206-b10a-0b4b374325ee] Pending
helpers_test.go:352: "busybox" [d7fdd532-26fc-4206-b10a-0b4b374325ee] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d7fdd532-26fc-4206-b10a-0b4b374325ee] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004699611s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-053182 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
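[editor's note: the check that fails here is simply a comparison of the pod's soft open-file limit against 1048576, using the exact kubectl exec command shown above. A minimal standalone sketch of that comparison follows; it is a hypothetical Go program for reproducing the check by hand, not the actual start_stop_delete_test.go code, and it assumes the default-k8s-diff-port-053182 context and busybox pod from this run still exist.]

// repro_ulimit_check.go: run `ulimit -n` inside the busybox pod via kubectl
// and compare the reported soft open-file limit with the value the test expects.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const expected = "1048576" // limit the DeployApp test expects inside the pod

	// Same command the test runs: kubectl --context <profile> exec busybox -- /bin/sh -c "ulimit -n"
	out, err := exec.Command(
		"kubectl", "--context", "default-k8s-diff-port-053182",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n",
	).CombinedOutput()
	if err != nil {
		fmt.Printf("exec failed: %v\n%s", err, out)
		return
	}

	got := strings.TrimSpace(string(out))
	if got != expected {
		// Matches the failure reported above: 'ulimit -n' returned 1024, expected 1048576
		fmt.Printf("'ulimit -n' returned %s, expected %s\n", got, expected)
		return
	}
	fmt.Println("open-file limit matches expectation:", got)
}

[end editor's note]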
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-053182
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-053182:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "963627da0e76ef5e5cbc9378eaf40cefb4f32c0658a6e69d7b47df7b412cbfab",
	        "Created": "2025-11-20T20:54:27.695157679Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 269321,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T20:54:27.734457835Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/963627da0e76ef5e5cbc9378eaf40cefb4f32c0658a6e69d7b47df7b412cbfab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/963627da0e76ef5e5cbc9378eaf40cefb4f32c0658a6e69d7b47df7b412cbfab/hostname",
	        "HostsPath": "/var/lib/docker/containers/963627da0e76ef5e5cbc9378eaf40cefb4f32c0658a6e69d7b47df7b412cbfab/hosts",
	        "LogPath": "/var/lib/docker/containers/963627da0e76ef5e5cbc9378eaf40cefb4f32c0658a6e69d7b47df7b412cbfab/963627da0e76ef5e5cbc9378eaf40cefb4f32c0658a6e69d7b47df7b412cbfab-json.log",
	        "Name": "/default-k8s-diff-port-053182",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-053182:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-053182",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "963627da0e76ef5e5cbc9378eaf40cefb4f32c0658a6e69d7b47df7b412cbfab",
	                "LowerDir": "/var/lib/docker/overlay2/55e5f6b5ca700e8cb83aaf4f3e862bb714728d9a772d402f94e3fe4379c0961a-init/diff:/var/lib/docker/overlay2/b8e13cfd95c92c89e06ea4ca61f150e2b9e9586529048197192d1a83648ef8cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/55e5f6b5ca700e8cb83aaf4f3e862bb714728d9a772d402f94e3fe4379c0961a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/55e5f6b5ca700e8cb83aaf4f3e862bb714728d9a772d402f94e3fe4379c0961a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/55e5f6b5ca700e8cb83aaf4f3e862bb714728d9a772d402f94e3fe4379c0961a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-053182",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-053182/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-053182",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-053182",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-053182",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "614313afd378b4568997eaf040b0cdf2f33329765d4a8b736a177852cdfd97f6",
	            "SandboxKey": "/var/run/docker/netns/614313afd378",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-053182": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4f214e15c73bb9c6c638c72095a989fd20575dded2cc6854dc6057351fd56bb9",
	                    "EndpointID": "e5eba6478597fd9e4de5cb5d1ddd50d38f1c208068b4c53a5730a85020143776",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "be:bf:c1:12:e9:7e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-053182",
	                        "963627da0e76"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-053182 -n default-k8s-diff-port-053182
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-053182 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-053182 logs -n 25: (1.111986791s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ image   │ old-k8s-version-715005 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-715005       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ pause   │ -p old-k8s-version-715005 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-715005       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ unpause │ -p old-k8s-version-715005 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-715005       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ delete  │ -p old-k8s-version-715005                                                                                                                                                                                                                           │ old-k8s-version-715005       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ delete  │ -p old-k8s-version-715005                                                                                                                                                                                                                           │ old-k8s-version-715005       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ start   │ -p embed-certs-954820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-954820           │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ image   │ no-preload-480337 image list --format=json                                                                                                                                                                                                          │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ pause   │ -p no-preload-480337 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ start   │ -p cert-expiration-137718 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-137718       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ unpause │ -p no-preload-480337 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ delete  │ -p no-preload-480337                                                                                                                                                                                                                                │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ delete  │ -p no-preload-480337                                                                                                                                                                                                                                │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ delete  │ -p disable-driver-mounts-311936                                                                                                                                                                                                                     │ disable-driver-mounts-311936 │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ start   │ -p default-k8s-diff-port-053182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-053182 │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:55 UTC │
	│ delete  │ -p cert-expiration-137718                                                                                                                                                                                                                           │ cert-expiration-137718       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ start   │ -p newest-cni-439796 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ addons  │ enable metrics-server -p newest-cni-439796 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ stop    │ -p newest-cni-439796 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ addons  │ enable dashboard -p newest-cni-439796 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ start   │ -p newest-cni-439796 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-954820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-954820           │ jenkins │ v1.37.0 │ 20 Nov 25 20:55 UTC │ 20 Nov 25 20:55 UTC │
	│ stop    │ -p embed-certs-954820 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-954820           │ jenkins │ v1.37.0 │ 20 Nov 25 20:55 UTC │                     │
	│ image   │ newest-cni-439796 image list --format=json                                                                                                                                                                                                          │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:55 UTC │ 20 Nov 25 20:55 UTC │
	│ pause   │ -p newest-cni-439796 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:55 UTC │ 20 Nov 25 20:55 UTC │
	│ unpause │ -p newest-cni-439796 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:55 UTC │ 20 Nov 25 20:55 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:54:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:54:59.857828  278240 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:54:59.858105  278240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:54:59.858115  278240 out.go:374] Setting ErrFile to fd 2...
	I1120 20:54:59.858119  278240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:54:59.858349  278240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
	I1120 20:54:59.858826  278240 out.go:368] Setting JSON to false
	I1120 20:54:59.860194  278240 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2252,"bootTime":1763669848,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:54:59.860277  278240 start.go:143] virtualization: kvm guest
	I1120 20:54:59.862251  278240 out.go:179] * [newest-cni-439796] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:54:59.863664  278240 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:54:59.863667  278240 notify.go:221] Checking for updates...
	I1120 20:54:59.864889  278240 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:54:59.866102  278240 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3769/kubeconfig
	I1120 20:54:59.867392  278240 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3769/.minikube
	I1120 20:54:59.868550  278240 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:54:59.869682  278240 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:54:59.871457  278240 config.go:182] Loaded profile config "newest-cni-439796": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:54:59.871972  278240 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:54:59.895937  278240 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:54:59.896024  278240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:54:59.953310  278240 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-20 20:54:59.943244297 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:54:59.953450  278240 docker.go:319] overlay module found
	I1120 20:54:59.955196  278240 out.go:179] * Using the docker driver based on existing profile
	I1120 20:54:59.956312  278240 start.go:309] selected driver: docker
	I1120 20:54:59.956329  278240 start.go:930] validating driver "docker" against &{Name:newest-cni-439796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-439796 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:54:59.956444  278240 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:54:59.956970  278240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:55:00.019097  278240 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-20 20:55:00.008082303 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:55:00.019426  278240 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 20:55:00.019462  278240 cni.go:84] Creating CNI manager for ""
	I1120 20:55:00.019528  278240 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 20:55:00.019596  278240 start.go:353] cluster config:
	{Name:newest-cni-439796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-439796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:55:00.021170  278240 out.go:179] * Starting "newest-cni-439796" primary control-plane node in "newest-cni-439796" cluster
	I1120 20:55:00.022241  278240 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1120 20:55:00.023448  278240 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:55:00.024648  278240 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 20:55:00.024678  278240 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1120 20:55:00.024688  278240 cache.go:65] Caching tarball of preloaded images
	I1120 20:55:00.024751  278240 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:55:00.024781  278240 preload.go:238] Found /home/jenkins/minikube-integration/21923-3769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1120 20:55:00.024793  278240 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1120 20:55:00.024892  278240 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796/config.json ...
	I1120 20:55:00.047349  278240 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 20:55:00.047385  278240 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 20:55:00.047421  278240 cache.go:243] Successfully downloaded all kic artifacts
	I1120 20:55:00.047453  278240 start.go:360] acquireMachinesLock for newest-cni-439796: {Name:mkd377b5021ac8b488b2c648334cf58462a4dda8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:55:00.047519  278240 start.go:364] duration metric: took 41.671µs to acquireMachinesLock for "newest-cni-439796"
	I1120 20:55:00.047542  278240 start.go:96] Skipping create...Using existing machine configuration
	I1120 20:55:00.047552  278240 fix.go:54] fixHost starting: 
	I1120 20:55:00.047793  278240 cli_runner.go:164] Run: docker container inspect newest-cni-439796 --format={{.State.Status}}
	I1120 20:55:00.066752  278240 fix.go:112] recreateIfNeeded on newest-cni-439796: state=Stopped err=<nil>
	W1120 20:55:00.066782  278240 fix.go:138] unexpected machine state, will restart: <nil>
	W1120 20:54:59.168958  267938 node_ready.go:57] node "default-k8s-diff-port-053182" has "Ready":"False" status (will retry)
	W1120 20:55:01.169101  267938 node_ready.go:57] node "default-k8s-diff-port-053182" has "Ready":"False" status (will retry)
	I1120 20:55:01.669497  267938 node_ready.go:49] node "default-k8s-diff-port-053182" is "Ready"
	I1120 20:55:01.669530  267938 node_ready.go:38] duration metric: took 11.503696878s for node "default-k8s-diff-port-053182" to be "Ready" ...
	I1120 20:55:01.669547  267938 api_server.go:52] waiting for apiserver process to appear ...
	I1120 20:55:01.669608  267938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:55:01.684444  267938 api_server.go:72] duration metric: took 11.853641818s to wait for apiserver process to appear ...
	I1120 20:55:01.684479  267938 api_server.go:88] waiting for apiserver healthz status ...
	I1120 20:55:01.684517  267938 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1120 20:55:01.690782  267938 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1120 20:55:01.691893  267938 api_server.go:141] control plane version: v1.34.1
	I1120 20:55:01.691922  267938 api_server.go:131] duration metric: took 7.434681ms to wait for apiserver health ...
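
The healthz wait above is just an HTTPS GET against the apiserver's /healthz endpoint, repeated until it answers 200. A minimal standalone sketch of that probe (endpoint and timeout values copied from the log; certificate verification is skipped only because the test cluster's apiserver certificate is signed by minikubeCA rather than a public CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // probeHealthz polls an apiserver /healthz endpoint until it returns 200 or the deadline passes.
    func probeHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Self-signed test-cluster CA, so verification is skipped in this sketch.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned 200: %s\n", url, body)
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
    }

    func main() {
        if err := probeHealthz("https://192.168.76.2:8444/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
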
	I1120 20:55:01.691934  267938 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 20:55:01.695775  267938 system_pods.go:59] 8 kube-system pods found
	I1120 20:55:01.695832  267938 system_pods.go:61] "coredns-66bc5c9577-m5kfb" [7af76736-ef8a-434f-ad0c-b52641f9f02d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:55:01.695845  267938 system_pods.go:61] "etcd-default-k8s-diff-port-053182" [bd91f04b-5f3e-4a56-9854-44217a3e84c4] Running
	I1120 20:55:01.695858  267938 system_pods.go:61] "kindnet-sg6pg" [1f060cb7-fe2e-40da-b620-0ae4ab1b46ca] Running
	I1120 20:55:01.695873  267938 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-053182" [233f521c-596a-48b5-a075-6f7047f8681e] Running
	I1120 20:55:01.695882  267938 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-053182" [b8030abf-6545-401c-9be1-ff6d1e183855] Running
	I1120 20:55:01.695888  267938 system_pods.go:61] "kube-proxy-9dwtf" [f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3] Running
	I1120 20:55:01.695897  267938 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-053182" [e69e64b5-7879-46ff-9920-5090e462be17] Running
	I1120 20:55:01.695905  267938 system_pods.go:61] "storage-provisioner" [47956acc-9579-4eb7-9d9f-a6e82239fcd8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:55:01.695917  267938 system_pods.go:74] duration metric: took 3.975656ms to wait for pod list to return data ...
	I1120 20:55:01.695931  267938 default_sa.go:34] waiting for default service account to be created ...
	I1120 20:55:01.702110  267938 default_sa.go:45] found service account: "default"
	I1120 20:55:01.702135  267938 default_sa.go:55] duration metric: took 6.196385ms for default service account to be created ...
	I1120 20:55:01.702146  267938 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 20:55:01.796537  267938 system_pods.go:86] 8 kube-system pods found
	I1120 20:55:01.796576  267938 system_pods.go:89] "coredns-66bc5c9577-m5kfb" [7af76736-ef8a-434f-ad0c-b52641f9f02d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:55:01.796585  267938 system_pods.go:89] "etcd-default-k8s-diff-port-053182" [bd91f04b-5f3e-4a56-9854-44217a3e84c4] Running
	I1120 20:55:01.796599  267938 system_pods.go:89] "kindnet-sg6pg" [1f060cb7-fe2e-40da-b620-0ae4ab1b46ca] Running
	I1120 20:55:01.796605  267938 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-053182" [233f521c-596a-48b5-a075-6f7047f8681e] Running
	I1120 20:55:01.796610  267938 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-053182" [b8030abf-6545-401c-9be1-ff6d1e183855] Running
	I1120 20:55:01.796621  267938 system_pods.go:89] "kube-proxy-9dwtf" [f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3] Running
	I1120 20:55:01.796626  267938 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-053182" [e69e64b5-7879-46ff-9920-5090e462be17] Running
	I1120 20:55:01.796634  267938 system_pods.go:89] "storage-provisioner" [47956acc-9579-4eb7-9d9f-a6e82239fcd8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:55:01.796670  267938 retry.go:31] will retry after 230.554359ms: missing components: kube-dns
	I1120 20:55:02.032424  267938 system_pods.go:86] 8 kube-system pods found
	I1120 20:55:02.032457  267938 system_pods.go:89] "coredns-66bc5c9577-m5kfb" [7af76736-ef8a-434f-ad0c-b52641f9f02d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:55:02.032465  267938 system_pods.go:89] "etcd-default-k8s-diff-port-053182" [bd91f04b-5f3e-4a56-9854-44217a3e84c4] Running
	I1120 20:55:02.032474  267938 system_pods.go:89] "kindnet-sg6pg" [1f060cb7-fe2e-40da-b620-0ae4ab1b46ca] Running
	I1120 20:55:02.032479  267938 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-053182" [233f521c-596a-48b5-a075-6f7047f8681e] Running
	I1120 20:55:02.032484  267938 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-053182" [b8030abf-6545-401c-9be1-ff6d1e183855] Running
	I1120 20:55:02.032489  267938 system_pods.go:89] "kube-proxy-9dwtf" [f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3] Running
	I1120 20:55:02.032493  267938 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-053182" [e69e64b5-7879-46ff-9920-5090e462be17] Running
	I1120 20:55:02.032500  267938 system_pods.go:89] "storage-provisioner" [47956acc-9579-4eb7-9d9f-a6e82239fcd8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:55:02.032519  267938 retry.go:31] will retry after 327.025815ms: missing components: kube-dns
	I1120 20:55:02.365222  267938 system_pods.go:86] 8 kube-system pods found
	I1120 20:55:02.365305  267938 system_pods.go:89] "coredns-66bc5c9577-m5kfb" [7af76736-ef8a-434f-ad0c-b52641f9f02d] Running
	I1120 20:55:02.365316  267938 system_pods.go:89] "etcd-default-k8s-diff-port-053182" [bd91f04b-5f3e-4a56-9854-44217a3e84c4] Running
	I1120 20:55:02.365326  267938 system_pods.go:89] "kindnet-sg6pg" [1f060cb7-fe2e-40da-b620-0ae4ab1b46ca] Running
	I1120 20:55:02.365334  267938 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-053182" [233f521c-596a-48b5-a075-6f7047f8681e] Running
	I1120 20:55:02.365351  267938 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-053182" [b8030abf-6545-401c-9be1-ff6d1e183855] Running
	I1120 20:55:02.365357  267938 system_pods.go:89] "kube-proxy-9dwtf" [f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3] Running
	I1120 20:55:02.365363  267938 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-053182" [e69e64b5-7879-46ff-9920-5090e462be17] Running
	I1120 20:55:02.365394  267938 system_pods.go:89] "storage-provisioner" [47956acc-9579-4eb7-9d9f-a6e82239fcd8] Running
	I1120 20:55:02.365405  267938 system_pods.go:126] duration metric: took 663.251244ms to wait for k8s-apps to be running ...
	I1120 20:55:02.365435  267938 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:55:02.365836  267938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:55:02.379259  267938 system_svc.go:56] duration metric: took 13.837433ms WaitForService to wait for kubelet
	I1120 20:55:02.379293  267938 kubeadm.go:587] duration metric: took 12.548497918s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:55:02.379319  267938 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:55:02.382189  267938 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:55:02.382219  267938 node_conditions.go:123] node cpu capacity is 8
	I1120 20:55:02.382231  267938 node_conditions.go:105] duration metric: took 2.905948ms to run NodePressure ...
	I1120 20:55:02.382244  267938 start.go:242] waiting for startup goroutines ...
	I1120 20:55:02.382254  267938 start.go:247] waiting for cluster config update ...
	I1120 20:55:02.382269  267938 start.go:256] writing updated cluster config ...
	I1120 20:55:02.382592  267938 ssh_runner.go:195] Run: rm -f paused
	I1120 20:55:02.386235  267938 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:55:02.389651  267938 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m5kfb" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.393726  267938 pod_ready.go:94] pod "coredns-66bc5c9577-m5kfb" is "Ready"
	I1120 20:55:02.393745  267938 pod_ready.go:86] duration metric: took 4.074153ms for pod "coredns-66bc5c9577-m5kfb" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.395689  267938 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.399316  267938 pod_ready.go:94] pod "etcd-default-k8s-diff-port-053182" is "Ready"
	I1120 20:55:02.399335  267938 pod_ready.go:86] duration metric: took 3.628858ms for pod "etcd-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.401248  267938 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.404743  267938 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-053182" is "Ready"
	I1120 20:55:02.404759  267938 pod_ready.go:86] duration metric: took 3.496456ms for pod "kube-apiserver-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.406414  267938 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.790539  267938 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-053182" is "Ready"
	I1120 20:55:02.790573  267938 pod_ready.go:86] duration metric: took 384.138389ms for pod "kube-controller-manager-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.990773  267938 pod_ready.go:83] waiting for pod "kube-proxy-9dwtf" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:03.390942  267938 pod_ready.go:94] pod "kube-proxy-9dwtf" is "Ready"
	I1120 20:55:03.390966  267938 pod_ready.go:86] duration metric: took 400.162298ms for pod "kube-proxy-9dwtf" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:03.591644  267938 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:03.990591  267938 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-053182" is "Ready"
	I1120 20:55:03.990620  267938 pod_ready.go:86] duration metric: took 398.945663ms for pod "kube-scheduler-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:03.990634  267938 pod_ready.go:40] duration metric: took 1.604373018s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:55:04.040872  267938 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 20:55:04.046253  267938 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-053182" cluster and "default" namespace by default
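
The system_pods and pod_ready waits above all follow the same poll-and-retry shape: evaluate a readiness condition, log what is still missing, sleep briefly, and give up at a deadline. A simplified sketch of that pattern (fixed interval and a placeholder condition, whereas the real retry.go varies its backoff):

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls condition at the given interval until it reports done or ctx expires,
    // mirroring the "will retry after ...: missing components" loops in the log above.
    func waitFor(ctx context.Context, interval time.Duration, condition func() (done bool, reason string)) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            done, reason := condition()
            if done {
                return nil
            }
            fmt.Printf("will retry after %s: %s\n", interval, reason)
            select {
            case <-ctx.Done():
                return errors.New("timed out waiting for condition")
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        // Placeholder condition standing in for "all kube-system pods Running".
        start := time.Now()
        err := waitFor(ctx, 300*time.Millisecond, func() (bool, string) {
            return time.Since(start) > time.Second, "missing components: kube-dns"
        })
        fmt.Println("wait finished, err =", err)
    }
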
	I1120 20:55:00.068525  278240 out.go:252] * Restarting existing docker container for "newest-cni-439796" ...
	I1120 20:55:00.068597  278240 cli_runner.go:164] Run: docker start newest-cni-439796
	I1120 20:55:00.341240  278240 cli_runner.go:164] Run: docker container inspect newest-cni-439796 --format={{.State.Status}}
	I1120 20:55:00.361218  278240 kic.go:430] container "newest-cni-439796" state is running.
	I1120 20:55:00.361592  278240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-439796
	I1120 20:55:00.380436  278240 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796/config.json ...
	I1120 20:55:00.380646  278240 machine.go:94] provisionDockerMachine start ...
	I1120 20:55:00.380703  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:00.399740  278240 main.go:143] libmachine: Using SSH client type: native
	I1120 20:55:00.399992  278240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1120 20:55:00.400005  278240 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:55:00.400638  278240 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44044->127.0.0.1:33094: read: connection reset by peer
	I1120 20:55:03.537357  278240 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-439796
	
	I1120 20:55:03.537416  278240 ubuntu.go:182] provisioning hostname "newest-cni-439796"
	I1120 20:55:03.537490  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:03.564681  278240 main.go:143] libmachine: Using SSH client type: native
	I1120 20:55:03.565007  278240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1120 20:55:03.565025  278240 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-439796 && echo "newest-cni-439796" | sudo tee /etc/hostname
	I1120 20:55:03.714348  278240 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-439796
	
	I1120 20:55:03.714449  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:03.733081  278240 main.go:143] libmachine: Using SSH client type: native
	I1120 20:55:03.733307  278240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1120 20:55:03.733326  278240 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-439796' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-439796/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-439796' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:55:03.870069  278240 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:55:03.870099  278240 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-3769/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-3769/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-3769/.minikube}
	I1120 20:55:03.870136  278240 ubuntu.go:190] setting up certificates
	I1120 20:55:03.870148  278240 provision.go:84] configureAuth start
	I1120 20:55:03.870204  278240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-439796
	I1120 20:55:03.888998  278240 provision.go:143] copyHostCerts
	I1120 20:55:03.889072  278240 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3769/.minikube/ca.pem, removing ...
	I1120 20:55:03.889086  278240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3769/.minikube/ca.pem
	I1120 20:55:03.889169  278240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-3769/.minikube/ca.pem (1082 bytes)
	I1120 20:55:03.889364  278240 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3769/.minikube/cert.pem, removing ...
	I1120 20:55:03.889391  278240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3769/.minikube/cert.pem
	I1120 20:55:03.889436  278240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-3769/.minikube/cert.pem (1123 bytes)
	I1120 20:55:03.889525  278240 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3769/.minikube/key.pem, removing ...
	I1120 20:55:03.889536  278240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3769/.minikube/key.pem
	I1120 20:55:03.889569  278240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-3769/.minikube/key.pem (1679 bytes)
	I1120 20:55:03.889647  278240 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-3769/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca-key.pem org=jenkins.newest-cni-439796 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-439796]
	I1120 20:55:04.066966  278240 provision.go:177] copyRemoteCerts
	I1120 20:55:04.067036  278240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:55:04.067080  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:04.090856  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:04.196925  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:55:04.217358  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 20:55:04.242617  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 20:55:04.262514  278240 provision.go:87] duration metric: took 392.354465ms to configureAuth
	I1120 20:55:04.262545  278240 ubuntu.go:206] setting minikube options for container-runtime
	I1120 20:55:04.262716  278240 config.go:182] Loaded profile config "newest-cni-439796": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:55:04.262727  278240 machine.go:97] duration metric: took 3.882068475s to provisionDockerMachine
	I1120 20:55:04.262735  278240 start.go:293] postStartSetup for "newest-cni-439796" (driver="docker")
	I1120 20:55:04.262744  278240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:55:04.262787  278240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:55:04.262830  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:04.283586  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:04.382700  278240 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:55:04.386689  278240 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 20:55:04.386720  278240 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 20:55:04.386734  278240 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3769/.minikube/addons for local assets ...
	I1120 20:55:04.386784  278240 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3769/.minikube/files for local assets ...
	I1120 20:55:04.386890  278240 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-3769/.minikube/files/etc/ssl/certs/77312.pem -> 77312.pem in /etc/ssl/certs
	I1120 20:55:04.387094  278240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 20:55:04.395171  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/files/etc/ssl/certs/77312.pem --> /etc/ssl/certs/77312.pem (1708 bytes)
	I1120 20:55:04.412782  278240 start.go:296] duration metric: took 150.034316ms for postStartSetup
	I1120 20:55:04.412864  278240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:55:04.412910  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:04.433695  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:04.530336  278240 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 20:55:04.535206  278240 fix.go:56] duration metric: took 4.48764827s for fixHost
	I1120 20:55:04.535232  278240 start.go:83] releasing machines lock for "newest-cni-439796", held for 4.487699701s
	I1120 20:55:04.535302  278240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-439796
	I1120 20:55:04.557073  278240 ssh_runner.go:195] Run: cat /version.json
	I1120 20:55:04.557151  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:04.557181  278240 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:55:04.557249  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:04.579766  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:04.580774  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:04.679945  278240 ssh_runner.go:195] Run: systemctl --version
	I1120 20:55:04.743090  278240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:55:04.748524  278240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:55:04.748593  278240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:55:04.757428  278240 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 20:55:04.757454  278240 start.go:496] detecting cgroup driver to use...
	I1120 20:55:04.757485  278240 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 20:55:04.757548  278240 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1120 20:55:04.776538  278240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1120 20:55:04.791147  278240 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:55:04.791216  278240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:55:04.809821  278240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:55:04.824474  278240 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:55:04.915359  278240 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:55:05.005773  278240 docker.go:234] disabling docker service ...
	I1120 20:55:05.005848  278240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:55:05.022479  278240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:55:05.035295  278240 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:55:05.127413  278240 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:55:05.222594  278240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 20:55:05.237063  278240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:55:05.255195  278240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1120 20:55:05.265033  278240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1120 20:55:05.275404  278240 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1120 20:55:05.275476  278240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1120 20:55:05.286052  278240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1120 20:55:05.295782  278240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1120 20:55:05.304979  278240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1120 20:55:05.314472  278240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:55:05.323167  278240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1120 20:55:05.332745  278240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1120 20:55:05.342479  278240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1120 20:55:05.351858  278240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:55:05.359745  278240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 20:55:05.367752  278240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:55:05.471088  278240 ssh_runner.go:195] Run: sudo systemctl restart containerd
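
The sequence above rewrites /etc/containerd/config.toml with a series of sed substitutions (sandbox image, SystemdCgroup = true, runtime v2, CNI conf_dir) and then restarts containerd. A rough Go equivalent of just the cgroup-driver edit, operating on a hypothetical local copy of the file:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // enableSystemdCgroup rewrites `SystemdCgroup = ...` lines to `SystemdCgroup = true`,
    // the same edit the `sudo sed -i -r ...` command above applies to /etc/containerd/config.toml.
    func enableSystemdCgroup(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        // Hypothetical local copy; the real file lives at /etc/containerd/config.toml on the node.
        if err := enableSystemdCgroup("config.toml"); err != nil {
            fmt.Println("edit failed:", err)
        }
    }
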
	I1120 20:55:05.591585  278240 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1120 20:55:05.591681  278240 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1120 20:55:05.596088  278240 start.go:564] Will wait 60s for crictl version
	I1120 20:55:05.596147  278240 ssh_runner.go:195] Run: which crictl
	I1120 20:55:05.600407  278240 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 20:55:05.629326  278240 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1120 20:55:05.629405  278240 ssh_runner.go:195] Run: containerd --version
	I1120 20:55:05.655318  278240 ssh_runner.go:195] Run: containerd --version
	I1120 20:55:05.684274  278240 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1120 20:55:05.685537  278240 cli_runner.go:164] Run: docker network inspect newest-cni-439796 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:55:05.705933  278240 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1120 20:55:05.710592  278240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
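
The bash one-liner above drops any existing host.minikube.internal line from /etc/hosts and appends the gateway mapping. A rough equivalent in Go, using a hypothetical local file and the IP and name shown in the log:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any line ending in "\t<name>" and appends "<ip>\t<name>",
    // mirroring the grep -v / echo / sudo cp pipeline in the log line above.
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // stale entry for this name; it gets re-added below
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        // Hypothetical local copy; the real target is /etc/hosts inside the node container.
        if err := ensureHostsEntry("hosts", "192.168.94.1", "host.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }
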
	I1120 20:55:05.723139  278240 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1120 20:55:05.724415  278240 kubeadm.go:884] updating cluster {Name:newest-cni-439796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-439796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 20:55:05.724553  278240 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 20:55:05.724612  278240 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:55:05.751395  278240 containerd.go:627] all images are preloaded for containerd runtime.
	I1120 20:55:05.751418  278240 containerd.go:534] Images already preloaded, skipping extraction
	I1120 20:55:05.751465  278240 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:55:05.779235  278240 containerd.go:627] all images are preloaded for containerd runtime.
	I1120 20:55:05.779260  278240 cache_images.go:86] Images are preloaded, skipping loading
	I1120 20:55:05.779269  278240 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 containerd true true} ...
	I1120 20:55:05.779416  278240 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-439796 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-439796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 20:55:05.779488  278240 ssh_runner.go:195] Run: sudo crictl info
	I1120 20:55:05.807554  278240 cni.go:84] Creating CNI manager for ""
	I1120 20:55:05.807573  278240 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 20:55:05.807589  278240 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1120 20:55:05.807612  278240 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-439796 NodeName:newest-cni-439796 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 20:55:05.807739  278240 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-439796"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 20:55:05.807802  278240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:55:05.817304  278240 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 20:55:05.817359  278240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 20:55:05.825631  278240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1120 20:55:05.840583  278240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:55:05.854420  278240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
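
The kubeadm, kubelet and kube-proxy configurations printed earlier are written out as one multi-document YAML file (kubeadm.yaml.new above). A toy sketch that splits such a file on its --- separators and lists each document's kind, assuming a local copy named kubeadm.yaml:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // listKinds prints the `kind:` of each YAML document in a generated kubeadm config file.
    func listKinds(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        for i, doc := range strings.Split(string(data), "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind: ") {
                    fmt.Printf("document %d: %s\n", i, strings.TrimPrefix(line, "kind: "))
                }
            }
        }
        return nil
    }

    func main() {
        // Hypothetical local copy of the config dumped above.
        if err := listKinds("kubeadm.yaml"); err != nil {
            fmt.Println(err)
        }
    }
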
	I1120 20:55:05.869219  278240 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1120 20:55:05.873465  278240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:55:05.884129  278240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:55:05.974696  278240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:55:05.997065  278240 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796 for IP: 192.168.94.2
	I1120 20:55:05.997089  278240 certs.go:195] generating shared ca certs ...
	I1120 20:55:05.997109  278240 certs.go:227] acquiring lock for ca certs: {Name:mk775617087d2732283088aad08819408765453b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:55:05.997270  278240 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-3769/.minikube/ca.key
	I1120 20:55:05.997317  278240 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-3769/.minikube/proxy-client-ca.key
	I1120 20:55:05.997332  278240 certs.go:257] generating profile certs ...
	I1120 20:55:05.997481  278240 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796/client.key
	I1120 20:55:05.997548  278240 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796/apiserver.key.2ac9c80b
	I1120 20:55:05.997601  278240 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796/proxy-client.key
	I1120 20:55:05.997753  278240 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/7731.pem (1338 bytes)
	W1120 20:55:05.997800  278240 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-3769/.minikube/certs/7731_empty.pem, impossibly tiny 0 bytes
	I1120 20:55:05.997813  278240 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 20:55:05.997848  278240 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:55:05.997903  278240 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:55:05.997935  278240 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/key.pem (1679 bytes)
	I1120 20:55:05.997975  278240 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/files/etc/ssl/certs/77312.pem (1708 bytes)
	I1120 20:55:05.998947  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:55:06.022257  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:55:06.047956  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:55:06.072926  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:55:06.099945  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 20:55:06.127812  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 20:55:06.152822  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 20:55:06.173240  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 20:55:06.194867  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/files/etc/ssl/certs/77312.pem --> /usr/share/ca-certificates/77312.pem (1708 bytes)
	I1120 20:55:06.217850  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:55:06.242132  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/certs/7731.pem --> /usr/share/ca-certificates/7731.pem (1338 bytes)
	I1120 20:55:06.263764  278240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 20:55:06.277765  278240 ssh_runner.go:195] Run: openssl version
	I1120 20:55:06.285624  278240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/77312.pem
	I1120 20:55:06.294684  278240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/77312.pem /etc/ssl/certs/77312.pem
	I1120 20:55:06.303738  278240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77312.pem
	I1120 20:55:06.308259  278240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:26 /usr/share/ca-certificates/77312.pem
	I1120 20:55:06.308323  278240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77312.pem
	I1120 20:55:06.346444  278240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 20:55:06.354642  278240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:55:06.363400  278240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:55:06.371708  278240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:55:06.376138  278240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:55:06.376194  278240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:55:06.415213  278240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 20:55:06.423811  278240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7731.pem
	I1120 20:55:06.432016  278240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7731.pem /etc/ssl/certs/7731.pem
	I1120 20:55:06.440143  278240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7731.pem
	I1120 20:55:06.444748  278240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:26 /usr/share/ca-certificates/7731.pem
	I1120 20:55:06.444813  278240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7731.pem
	I1120 20:55:06.482317  278240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 20:55:06.491206  278240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:55:06.495446  278240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 20:55:06.534473  278240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 20:55:06.588509  278240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 20:55:06.643817  278240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 20:55:06.703005  278240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 20:55:06.769940  278240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
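
Each of the checks above runs openssl x509 -noout -checkend 86400 against a certificate, i.e. it asks whether the certificate expires within the next 24 hours. The same check in Go, against a hypothetical local certificate file:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the same question `openssl x509 -noout -checkend 86400` answers in the log above.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        // Hypothetical path; the log checks certs under /var/lib/minikube/certs/.
        soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println("expires within 24h:", soon, "err:", err)
    }
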
	I1120 20:55:06.836316  278240 kubeadm.go:401] StartCluster: {Name:newest-cni-439796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-439796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:55:06.836446  278240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1120 20:55:06.836516  278240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:55:06.885298  278240 cri.go:89] found id: "49347eda0182605dd4ef7cf7d0cf00f85c7a5abafe0b8c60126fc160cc70b0a9"
	I1120 20:55:06.885324  278240 cri.go:89] found id: "b29f9a9559f5f73f8ff7f7d1deafc123f8b9b493527b013f62aedb2e79ac8e43"
	I1120 20:55:06.885330  278240 cri.go:89] found id: "fd8955c6a55abdf0fbec0325f092e2e88f77a629f1b1f1c9853794b138bf33e4"
	I1120 20:55:06.885335  278240 cri.go:89] found id: "89ffc1b6476d847109828e6cd3c5db9ee0dbadcd2674eabcdbab71491b20f406"
	I1120 20:55:06.885339  278240 cri.go:89] found id: "680d5ea55c3a1bcffab71661dcad66887fd3065ef54ae42dce5a22da37d85503"
	I1120 20:55:06.885344  278240 cri.go:89] found id: "b4b9911a652a9a0aab927183a3e56fa355872a9d79a72a255ac6a54f8ca414fd"
	I1120 20:55:06.885348  278240 cri.go:89] found id: "8fea5b58894fc92f826d414aa12f8a7b0531f4c497f699fd75d9676afa9f3b9c"
	I1120 20:55:06.885351  278240 cri.go:89] found id: "519979b0715f31a7d1ff9784de4371f78b61b8ce78aa037985a3206e5ebeff15"
	I1120 20:55:06.885355  278240 cri.go:89] found id: "5ff2d4262b7871a5f88a225f6d65dfba458b597ec7a310b7f50f56640e7e4845"
	I1120 20:55:06.885364  278240 cri.go:89] found id: "8c135e548e60296ebe8b92267fc334cc7f2086e45cea67ae14ad02b9bcc16a01"
	I1120 20:55:06.885378  278240 cri.go:89] found id: ""
	I1120 20:55:06.885427  278240 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1120 20:55:06.912701  278240 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"49347eda0182605dd4ef7cf7d0cf00f85c7a5abafe0b8c60126fc160cc70b0a9","pid":986,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/49347eda0182605dd4ef7cf7d0cf00f85c7a5abafe0b8c60126fc160cc70b0a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/49347eda0182605dd4ef7cf7d0cf00f85c7a5abafe0b8c60126fc160cc70b0a9/rootfs","created":"2025-11-20T20:55:06.791401287Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"5991f4f03bd7b8ee06d8e5994261f6bbbb4946baed62b4cc417c5c72a2b67bb1","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-439796","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"20142ccb0fa290322b21529c3fee9f5d"},"owner":"root"},{"ociVersion":"1.2.1","id":"5325fb115be6f954972b126cb1c83e40b17960f3a320d12907ac86451b2f7e59","pid":859,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5325fb115be6f954972b126cb1c83e40b17960f3a320d12907ac86451b2f7e59","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5325fb115be6f954972b126cb1c83e40b17960f3a320d12907ac86451b2f7e59/rootfs","created":"2025-11-20T20:55:06.641024515Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"5325fb115be6f954972b126cb1c83e40b17960f3a320d12907ac86451b2f7e59","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-439796_b1a9c49c8334b79aea52840a4e22a3ee","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-439796","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b1a9c49c8334b79aea52840a4e22a3ee"},"owner":"root"},{"ociVersion":"1.2.1","id":"5991f4f03bd7b8ee06d8e5994261f6bbbb4946baed62b4cc417c5c72a2b67bb1","pid":866,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5991f4f03bd7b8ee06d8e5994261f6bbbb4946baed62b4cc417c5c72a2b67bb1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5991f4f03bd7b8ee06d8e5994261f6bbbb4946baed62b4cc417c5c72a2b67bb1/rootfs","created":"2025-11-20T20:55:06.645183115Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"5991f4f03bd7b8ee06d8e5994261f6bbbb4946baed62b4cc417c5c72a2b67bb1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-439796_20142ccb0fa290322b21529c3fee9f5d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-439796","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"20142ccb0fa290322b21529c3fee9f5d"},"owner":"root"},{"ociVersion":"1.2.1","id":"7e44b7ce6cb576ae9115f4911a14668aa89687a7c3e2ea1d5b2035a443b72196","pid":810,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e44b7ce6cb576ae9115f4911a14668aa89687a7c3e2ea1d5b2035a443b72196","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e44b7ce6cb576ae9115f4911a14668aa89687a7c3e2ea1d5b2035a443b72196/rootfs","created":"2025-11-20T20:55:06.613077257Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"7e44b7ce6cb576ae9115f4911a14668aa89687a7c3e2ea1d5b2035a443b72196","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-439796_302580d78efe025b0c5d637fd2421ce8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-439796","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"302580d78efe025b0c5d637fd2421ce8"},"owner":"root"},{"ociVersion":"1.2.1","id":"89ffc1b6476d847109828e6cd3c5db9ee0dbadcd2674eabcdbab71491b20f406","pid":951,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/89ffc1b6476d847109828e6cd3c5db9ee0dbadcd2674eabcdbab71491b20f406","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/89ffc1b6476d847109828e6cd3c5db9ee0dbadcd2674eabcdbab71491b20f406/rootfs","created":"2025-11-20T20:55:06.764071056Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"7e44b7ce6cb576ae9115f4911a14668aa89687a7c3e2ea1d5b2035a443b72196","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-439796","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"302580d78efe025b0c5d637fd2421ce8"},"owner":"root"},{"ociVersion":"1.2.1","id":"b29f9a9559f5f73f8ff7f7d1deafc123f8b9b493527b013f62aedb2e79ac8e43","pid":979,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b29f9a9559f5f73f8ff7f7d1deafc123f8b9b493527b013f62aedb2e79ac8e43","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b29f9a9559f5f73f8ff7f7d1deafc123f8b9b493527b013f62aedb2e79ac8e43/rootfs","created":"2025-11-20T20:55:06.779306658Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"bfdb526fd94d494c784332950704c7c1902008b6b4e3a059cee3d9361c0f7f54","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-439796","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"669024f429d0435f565278cbf491faff"},"owner":"root"},{"ociVersion":"1.2.1","id":"bfdb526fd94d494c784332950704c7c1902008b6b4e3a059cee3d9361c0f7f54","pid":868,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfdb526fd94d494c784332950704c7c1902008b6b4e3a059cee3d9361c0f7f54","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfdb526fd94d494c784332950704c7c1902008b6b4e3a059cee3d9361c0f7f54/rootfs","created":"2025-11-20T20:55:06.650187729Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"bfdb526fd94d494c784332950704c7c1902008b6b4e3a059cee3d9361c0f7f54","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-439796_669024f429d0435f565278cbf491faff","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-439796","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"669024f429d0435f565278cbf491faff"},"owner":"root"},{"ociVersion":"1.2.1","id":"fd8955c6a55abdf0fbec0325f092e2e88f77a629f1b1f1c9853794b138bf33e4","pid":972,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd8955c6a55abdf0fbec0325f092e2e88f77a629f1b1f1c9853794b138bf33e4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd8955c6a55abdf0fbec0325f092e2e88f77a629f1b1f1c9853794b138bf33e4/rootfs","created":"2025-11-20T20:55:06.785297071Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"5325fb115be6f954972b126cb1c83e40b17960f3a320d12907ac86451b2f7e59","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-439796","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b1a9c49c8334b79aea52840a4e22a3ee"},"owner":"root"}]
	I1120 20:55:06.912936  278240 cri.go:126] list returned 8 containers
	I1120 20:55:06.912952  278240 cri.go:129] container: {ID:49347eda0182605dd4ef7cf7d0cf00f85c7a5abafe0b8c60126fc160cc70b0a9 Status:running}
	I1120 20:55:06.912975  278240 cri.go:135] skipping {49347eda0182605dd4ef7cf7d0cf00f85c7a5abafe0b8c60126fc160cc70b0a9 running}: state = "running", want "paused"
	I1120 20:55:06.912996  278240 cri.go:129] container: {ID:5325fb115be6f954972b126cb1c83e40b17960f3a320d12907ac86451b2f7e59 Status:running}
	I1120 20:55:06.913006  278240 cri.go:131] skipping 5325fb115be6f954972b126cb1c83e40b17960f3a320d12907ac86451b2f7e59 - not in ps
	I1120 20:55:06.913013  278240 cri.go:129] container: {ID:5991f4f03bd7b8ee06d8e5994261f6bbbb4946baed62b4cc417c5c72a2b67bb1 Status:running}
	I1120 20:55:06.913023  278240 cri.go:131] skipping 5991f4f03bd7b8ee06d8e5994261f6bbbb4946baed62b4cc417c5c72a2b67bb1 - not in ps
	I1120 20:55:06.913029  278240 cri.go:129] container: {ID:7e44b7ce6cb576ae9115f4911a14668aa89687a7c3e2ea1d5b2035a443b72196 Status:running}
	I1120 20:55:06.913036  278240 cri.go:131] skipping 7e44b7ce6cb576ae9115f4911a14668aa89687a7c3e2ea1d5b2035a443b72196 - not in ps
	I1120 20:55:06.913041  278240 cri.go:129] container: {ID:89ffc1b6476d847109828e6cd3c5db9ee0dbadcd2674eabcdbab71491b20f406 Status:running}
	I1120 20:55:06.913049  278240 cri.go:135] skipping {89ffc1b6476d847109828e6cd3c5db9ee0dbadcd2674eabcdbab71491b20f406 running}: state = "running", want "paused"
	I1120 20:55:06.913055  278240 cri.go:129] container: {ID:b29f9a9559f5f73f8ff7f7d1deafc123f8b9b493527b013f62aedb2e79ac8e43 Status:running}
	I1120 20:55:06.913062  278240 cri.go:135] skipping {b29f9a9559f5f73f8ff7f7d1deafc123f8b9b493527b013f62aedb2e79ac8e43 running}: state = "running", want "paused"
	I1120 20:55:06.913068  278240 cri.go:129] container: {ID:bfdb526fd94d494c784332950704c7c1902008b6b4e3a059cee3d9361c0f7f54 Status:running}
	I1120 20:55:06.913077  278240 cri.go:131] skipping bfdb526fd94d494c784332950704c7c1902008b6b4e3a059cee3d9361c0f7f54 - not in ps
	I1120 20:55:06.913086  278240 cri.go:129] container: {ID:fd8955c6a55abdf0fbec0325f092e2e88f77a629f1b1f1c9853794b138bf33e4 Status:running}
	I1120 20:55:06.913095  278240 cri.go:135] skipping {fd8955c6a55abdf0fbec0325f092e2e88f77a629f1b1f1c9853794b138bf33e4 running}: state = "running", want "paused"
	I1120 20:55:06.913145  278240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 20:55:06.921808  278240 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 20:55:06.921828  278240 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 20:55:06.921874  278240 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 20:55:06.929779  278240 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 20:55:06.931066  278240 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-439796" does not appear in /home/jenkins/minikube-integration/21923-3769/kubeconfig
	I1120 20:55:06.932018  278240 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-3769/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-439796" cluster setting kubeconfig missing "newest-cni-439796" context setting]
	I1120 20:55:06.933358  278240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/kubeconfig: {Name:mk92246a312eabd67c28c34f15135551d85e2541 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:55:06.935078  278240 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 20:55:06.943760  278240 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1120 20:55:06.943796  278240 kubeadm.go:602] duration metric: took 21.961753ms to restartPrimaryControlPlane
	I1120 20:55:06.943806  278240 kubeadm.go:403] duration metric: took 107.500823ms to StartCluster
	I1120 20:55:06.943825  278240 settings.go:142] acquiring lock: {Name:mkd78c1a946fc1da0bff0b049ee93f62b6457c3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:55:06.943892  278240 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-3769/kubeconfig
	I1120 20:55:06.946094  278240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/kubeconfig: {Name:mk92246a312eabd67c28c34f15135551d85e2541 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:55:06.946312  278240 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1120 20:55:06.946438  278240 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 20:55:06.946538  278240 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-439796"
	I1120 20:55:06.946556  278240 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-439796"
	W1120 20:55:06.946565  278240 addons.go:248] addon storage-provisioner should already be in state true
	I1120 20:55:06.946593  278240 host.go:66] Checking if "newest-cni-439796" exists ...
	I1120 20:55:06.946618  278240 config.go:182] Loaded profile config "newest-cni-439796": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:55:06.946667  278240 addons.go:70] Setting default-storageclass=true in profile "newest-cni-439796"
	I1120 20:55:06.946678  278240 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-439796"
	I1120 20:55:06.946897  278240 cli_runner.go:164] Run: docker container inspect newest-cni-439796 --format={{.State.Status}}
	I1120 20:55:06.946972  278240 addons.go:70] Setting metrics-server=true in profile "newest-cni-439796"
	I1120 20:55:06.946995  278240 addons.go:239] Setting addon metrics-server=true in "newest-cni-439796"
	W1120 20:55:06.947004  278240 addons.go:248] addon metrics-server should already be in state true
	I1120 20:55:06.947044  278240 host.go:66] Checking if "newest-cni-439796" exists ...
	I1120 20:55:06.947068  278240 cli_runner.go:164] Run: docker container inspect newest-cni-439796 --format={{.State.Status}}
	I1120 20:55:06.947240  278240 addons.go:70] Setting dashboard=true in profile "newest-cni-439796"
	I1120 20:55:06.947255  278240 addons.go:239] Setting addon dashboard=true in "newest-cni-439796"
	W1120 20:55:06.947263  278240 addons.go:248] addon dashboard should already be in state true
	I1120 20:55:06.947284  278240 host.go:66] Checking if "newest-cni-439796" exists ...
	I1120 20:55:06.947519  278240 cli_runner.go:164] Run: docker container inspect newest-cni-439796 --format={{.State.Status}}
	I1120 20:55:06.947766  278240 cli_runner.go:164] Run: docker container inspect newest-cni-439796 --format={{.State.Status}}
	I1120 20:55:06.947975  278240 out.go:179] * Verifying Kubernetes components...
	I1120 20:55:06.949539  278240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:55:06.975930  278240 addons.go:239] Setting addon default-storageclass=true in "newest-cni-439796"
	W1120 20:55:06.976504  278240 addons.go:248] addon default-storageclass should already be in state true
	I1120 20:55:06.976588  278240 host.go:66] Checking if "newest-cni-439796" exists ...
	I1120 20:55:06.977154  278240 cli_runner.go:164] Run: docker container inspect newest-cni-439796 --format={{.State.Status}}
	I1120 20:55:06.979879  278240 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:55:06.981133  278240 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1120 20:55:06.981150  278240 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:55:06.982302  278240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 20:55:06.982362  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:06.984348  278240 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1120 20:55:06.985351  278240 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1120 20:55:06.985403  278240 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1120 20:55:06.985439  278240 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1120 20:55:06.985486  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:06.986410  278240 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1120 20:55:06.986430  278240 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1120 20:55:06.986485  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:07.024539  278240 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 20:55:07.024564  278240 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 20:55:07.024628  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:07.038061  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:07.038122  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:07.045334  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:07.077937  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:07.159586  278240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:55:07.173182  278240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:55:07.180976  278240 api_server.go:52] waiting for apiserver process to appear ...
	I1120 20:55:07.181133  278240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:55:07.197659  278240 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1120 20:55:07.197684  278240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1120 20:55:07.214817  278240 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1120 20:55:07.214847  278240 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1120 20:55:07.219698  278240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 20:55:07.236698  278240 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1120 20:55:07.236739  278240 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1120 20:55:07.238962  278240 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1120 20:55:07.238988  278240 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1120 20:55:07.261316  278240 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1120 20:55:07.261398  278240 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1120 20:55:07.263680  278240 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 20:55:07.263699  278240 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1120 20:55:07.283693  278240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 20:55:07.284024  278240 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1120 20:55:07.284046  278240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1120 20:55:07.310201  278240 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1120 20:55:07.310240  278240 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1120 20:55:07.327656  278240 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1120 20:55:07.327679  278240 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1120 20:55:07.345555  278240 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1120 20:55:07.345581  278240 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1120 20:55:07.361901  278240 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1120 20:55:07.361931  278240 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1120 20:55:07.377645  278240 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 20:55:07.377675  278240 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1120 20:55:07.393648  278240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 20:55:09.287360  278240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.11414184s)
	I1120 20:55:09.287460  278240 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.106306477s)
	I1120 20:55:09.287499  278240 api_server.go:72] duration metric: took 2.341156335s to wait for apiserver process to appear ...
	I1120 20:55:09.287502  278240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.067775465s)
	I1120 20:55:09.287510  278240 api_server.go:88] waiting for apiserver healthz status ...
	I1120 20:55:09.287531  278240 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1120 20:55:09.287598  278240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.003872297s)
	I1120 20:55:09.287629  278240 addons.go:480] Verifying addon metrics-server=true in "newest-cni-439796"
	I1120 20:55:09.287713  278240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.894023771s)
	I1120 20:55:09.289364  278240 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-439796 addons enable metrics-server
	
	I1120 20:55:09.295012  278240 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 20:55:09.295051  278240 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 20:55:09.301006  278240 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I1120 20:55:09.062184  231112 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.060457942s)
	W1120 20:55:09.062238  231112 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1120 20:55:09.062249  231112 logs.go:123] Gathering logs for kube-apiserver [cb7769bf1648b74c1a546d0f3e756ef05dafac966c3de96e017408ab4cd99787] ...
	I1120 20:55:09.062265  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7769bf1648b74c1a546d0f3e756ef05dafac966c3de96e017408ab4cd99787"
	I1120 20:55:09.113136  231112 logs.go:123] Gathering logs for kube-apiserver [db00732a90f8c6d70acc941ae3bbac6147f57f0981a2c6e08b460374f8ff03d2] ...
	I1120 20:55:09.113176  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 db00732a90f8c6d70acc941ae3bbac6147f57f0981a2c6e08b460374f8ff03d2"
	I1120 20:55:09.171511  231112 logs.go:123] Gathering logs for kube-scheduler [0da6494bbfe7b9edac15def12ca9b9380f57b88a75e7babb5e74e1f6a49fff25] ...
	I1120 20:55:09.171552  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0da6494bbfe7b9edac15def12ca9b9380f57b88a75e7babb5e74e1f6a49fff25"
	I1120 20:55:09.302312  278240 addons.go:515] duration metric: took 2.35588774s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I1120 20:55:09.788644  278240 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1120 20:55:09.793157  278240 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1120 20:55:09.794197  278240 api_server.go:141] control plane version: v1.34.1
	I1120 20:55:09.794224  278240 api_server.go:131] duration metric: took 506.70699ms to wait for apiserver health ...
	I1120 20:55:09.794233  278240 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 20:55:09.797847  278240 system_pods.go:59] 9 kube-system pods found
	I1120 20:55:09.797876  278240 system_pods.go:61] "coredns-66bc5c9577-tq44x" [ff948205-df7c-4ef9-9f5c-477b2f9bd6c8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 20:55:09.797883  278240 system_pods.go:61] "etcd-newest-cni-439796" [738bd57a-0cd4-4a8d-93f1-abf8fc4d015c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 20:55:09.797890  278240 system_pods.go:61] "kindnet-9l2rj" [34d86602-3732-4a7c-9dec-c38291019e51] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 20:55:09.797899  278240 system_pods.go:61] "kube-apiserver-newest-cni-439796" [41ca83b9-690c-49f2-b682-7c1260206c13] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 20:55:09.797904  278240 system_pods.go:61] "kube-controller-manager-newest-cni-439796" [2894784f-75ab-43dd-a891-4fd2db248b92] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 20:55:09.797910  278240 system_pods.go:61] "kube-proxy-7vwkv" [b571700e-d4d8-4498-a70a-51e436c9b877] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 20:55:09.797922  278240 system_pods.go:61] "kube-scheduler-newest-cni-439796" [effcf967-37db-4b3d-b1cb-6faa7b0bc180] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 20:55:09.797929  278240 system_pods.go:61] "metrics-server-746fcd58dc-h7b8q" [4129553e-525e-4b8e-91d1-a0b08db35488] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 20:55:09.797934  278240 system_pods.go:61] "storage-provisioner" [9e5aa1c1-be78-4b77-a920-b640b885d141] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 20:55:09.797945  278240 system_pods.go:74] duration metric: took 3.705397ms to wait for pod list to return data ...
	I1120 20:55:09.797951  278240 default_sa.go:34] waiting for default service account to be created ...
	I1120 20:55:09.800263  278240 default_sa.go:45] found service account: "default"
	I1120 20:55:09.800281  278240 default_sa.go:55] duration metric: took 2.321435ms for default service account to be created ...
	I1120 20:55:09.800291  278240 kubeadm.go:587] duration metric: took 2.853949947s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 20:55:09.800306  278240 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:55:09.802485  278240 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:55:09.802507  278240 node_conditions.go:123] node cpu capacity is 8
	I1120 20:55:09.802517  278240 node_conditions.go:105] duration metric: took 2.206731ms to run NodePressure ...
	I1120 20:55:09.802527  278240 start.go:242] waiting for startup goroutines ...
	I1120 20:55:09.802534  278240 start.go:247] waiting for cluster config update ...
	I1120 20:55:09.802544  278240 start.go:256] writing updated cluster config ...
	I1120 20:55:09.802796  278240 ssh_runner.go:195] Run: rm -f paused
	I1120 20:55:09.862839  278240 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 20:55:09.864650  278240 out.go:179] * Done! kubectl is now configured to use "newest-cni-439796" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	b49146ca82c2b       56cc512116c8f       7 seconds ago       Running             busybox                   0                   ff2cac73caa52       busybox                                                default
	2d3beb216cf8e       52546a367cc9e       11 seconds ago      Running             coredns                   0                   3fb02eb4fc65c       coredns-66bc5c9577-m5kfb                               kube-system
	9600e46673cac       6e38f40d628db       11 seconds ago      Running             storage-provisioner       0                   748471ed2feae       storage-provisioner                                    kube-system
	19aee05378ba4       409467f978b4a       22 seconds ago      Running             kindnet-cni               0                   a20c2bc133c94       kindnet-sg6pg                                          kube-system
	29cd300441f9d       fc25172553d79       23 seconds ago      Running             kube-proxy                0                   670d86617d069       kube-proxy-9dwtf                                       kube-system
	10e893bdc3051       7dd6aaa1717ab       34 seconds ago      Running             kube-scheduler            0                   8bde2bb90b7ca       kube-scheduler-default-k8s-diff-port-053182            kube-system
	860d0852403df       c80c8dbafe7dd       34 seconds ago      Running             kube-controller-manager   0                   3f322568bd335       kube-controller-manager-default-k8s-diff-port-053182   kube-system
	3db382118c305       c3994bc696102       34 seconds ago      Running             kube-apiserver            0                   f7640c8a1781d       kube-apiserver-default-k8s-diff-port-053182            kube-system
	41b22f3183abd       5f1f5298c888d       34 seconds ago      Running             etcd                      0                   17d1476143614       etcd-default-k8s-diff-port-053182                      kube-system
	
	
	==> containerd <==
	Nov 20 20:55:01 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:01.706098159Z" level=info msg="CreateContainer within sandbox \"748471ed2feae2dcc3604e24f1cd22d43af7cb4bb486af0bbd1c8a1920d159f3\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"9600e46673cac711c07280ca4ae551bebb53a0a2f42ed748b9683017d7c9c837\""
	Nov 20 20:55:01 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:01.706545780Z" level=info msg="StartContainer for \"9600e46673cac711c07280ca4ae551bebb53a0a2f42ed748b9683017d7c9c837\""
	Nov 20 20:55:01 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:01.707294923Z" level=info msg="connecting to shim 9600e46673cac711c07280ca4ae551bebb53a0a2f42ed748b9683017d7c9c837" address="unix:///run/containerd/s/cb06d401419f18cbc1fbf98ed0e8757ec1ebba25bb8006d2862165e7f6c2d548" protocol=ttrpc version=3
	Nov 20 20:55:01 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:01.709383578Z" level=info msg="Container 2d3beb216cf8e2114c2cece1569d4889d26206d0960bfc6d8a93565ba4ca5896: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 20:55:01 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:01.715180328Z" level=info msg="CreateContainer within sandbox \"3fb02eb4fc65ccaf37f39098545f350d6f6300b98ddf77db7df205de4d248e95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2d3beb216cf8e2114c2cece1569d4889d26206d0960bfc6d8a93565ba4ca5896\""
	Nov 20 20:55:01 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:01.715979482Z" level=info msg="StartContainer for \"2d3beb216cf8e2114c2cece1569d4889d26206d0960bfc6d8a93565ba4ca5896\""
	Nov 20 20:55:01 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:01.716921615Z" level=info msg="connecting to shim 2d3beb216cf8e2114c2cece1569d4889d26206d0960bfc6d8a93565ba4ca5896" address="unix:///run/containerd/s/7b7c0c89af559364c2d129f7e92b91267db84fc4fac828ab8cfaf61458db82be" protocol=ttrpc version=3
	Nov 20 20:55:01 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:01.765359161Z" level=info msg="StartContainer for \"9600e46673cac711c07280ca4ae551bebb53a0a2f42ed748b9683017d7c9c837\" returns successfully"
	Nov 20 20:55:01 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:01.773693182Z" level=info msg="StartContainer for \"2d3beb216cf8e2114c2cece1569d4889d26206d0960bfc6d8a93565ba4ca5896\" returns successfully"
	Nov 20 20:55:04 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:04.548579017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:d7fdd532-26fc-4206-b10a-0b4b374325ee,Namespace:default,Attempt:0,}"
	Nov 20 20:55:04 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:04.590215696Z" level=info msg="connecting to shim ff2cac73caa52e175255f8e9ff2ffd72a8bcff61649027dbcd64be29d0c26e22" address="unix:///run/containerd/s/90a4dcb9aee5c2553c6bcf4ab25b1c345ad9d47ddbabae5564e3927f515cf0e9" namespace=k8s.io protocol=ttrpc version=3
	Nov 20 20:55:04 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:04.662814808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:d7fdd532-26fc-4206-b10a-0b4b374325ee,Namespace:default,Attempt:0,} returns sandbox id \"ff2cac73caa52e175255f8e9ff2ffd72a8bcff61649027dbcd64be29d0c26e22\""
	Nov 20 20:55:04 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:04.665041461Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.169105596Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.169864551Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396641"
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.171245385Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.173287130Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.173932896Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 1.508842033s"
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.173979477Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.178964277Z" level=info msg="CreateContainer within sandbox \"ff2cac73caa52e175255f8e9ff2ffd72a8bcff61649027dbcd64be29d0c26e22\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.186511447Z" level=info msg="Container b49146ca82c2bce453dbeed03d3ca1e0bb01b339a8cdc1dd1daf21c443c22739: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.192936543Z" level=info msg="CreateContainer within sandbox \"ff2cac73caa52e175255f8e9ff2ffd72a8bcff61649027dbcd64be29d0c26e22\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"b49146ca82c2bce453dbeed03d3ca1e0bb01b339a8cdc1dd1daf21c443c22739\""
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.193667118Z" level=info msg="StartContainer for \"b49146ca82c2bce453dbeed03d3ca1e0bb01b339a8cdc1dd1daf21c443c22739\""
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.194787414Z" level=info msg="connecting to shim b49146ca82c2bce453dbeed03d3ca1e0bb01b339a8cdc1dd1daf21c443c22739" address="unix:///run/containerd/s/90a4dcb9aee5c2553c6bcf4ab25b1c345ad9d47ddbabae5564e3927f515cf0e9" protocol=ttrpc version=3
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.252843031Z" level=info msg="StartContainer for \"b49146ca82c2bce453dbeed03d3ca1e0bb01b339a8cdc1dd1daf21c443c22739\" returns successfully"
	
	
	==> coredns [2d3beb216cf8e2114c2cece1569d4889d26206d0960bfc6d8a93565ba4ca5896] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45872 - 15983 "HINFO IN 8088891501838170795.6232109609411617053. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032677784s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-053182
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-053182
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=default-k8s-diff-port-053182
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_54_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:54:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-053182
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:55:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:55:01 +0000   Thu, 20 Nov 2025 20:54:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:55:01 +0000   Thu, 20 Nov 2025 20:54:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:55:01 +0000   Thu, 20 Nov 2025 20:54:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:55:01 +0000   Thu, 20 Nov 2025 20:55:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-053182
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                df9984e9-1f83-4a5b-8be1-3539de380cc3
	  Boot ID:                    7bcace10-faf8-4276-88b3-44b8d57bd915
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-m5kfb                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-053182                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-sg6pg                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-053182             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-053182    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-9dwtf                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-053182             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22s                kube-proxy       
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node default-k8s-diff-port-053182 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node default-k8s-diff-port-053182 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x7 over 35s)  kubelet          Node default-k8s-diff-port-053182 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  35s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29s                kubelet          Node default-k8s-diff-port-053182 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s                kubelet          Node default-k8s-diff-port-053182 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s                kubelet          Node default-k8s-diff-port-053182 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node default-k8s-diff-port-053182 event: Registered Node default-k8s-diff-port-053182 in Controller
	  Normal  NodeReady                12s                kubelet          Node default-k8s-diff-port-053182 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov20 20:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001791] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.083011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400115] i8042: Warning: Keylock active
	[  +0.013837] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499559] block sda: the capability attribute has been deprecated.
	[  +0.087912] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024934] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.433429] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [41b22f3183abd53ebe2e09fc6299d3b3770e1bb5eb3f29a5656b87f782fa33fb] <==
	{"level":"warn","ts":"2025-11-20T20:54:40.780590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.793814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.798957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.806673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.816730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.827291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.837351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.844431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.852510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.861535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.869675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.877831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.886161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.894989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.906287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.914579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.921962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.929679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.937454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.945310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.951929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.967695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.976022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.983730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:41.043918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55680","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:55:13 up 37 min,  0 user,  load average: 3.49, 2.97, 2.05
	Linux default-k8s-diff-port-053182 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [19aee05378ba4959ce80d9bc6b453ea4b001450c586e7a8637f4b6c82aa70dc1] <==
	I1120 20:54:50.885930       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 20:54:50.886185       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1120 20:54:50.886312       1 main.go:148] setting mtu 1500 for CNI 
	I1120 20:54:50.971117       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 20:54:50.971160       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T20:54:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 20:54:51.183002       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 20:54:51.183070       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 20:54:51.183083       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 20:54:51.183236       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 20:54:51.572047       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 20:54:51.572074       1 metrics.go:72] Registering metrics
	I1120 20:54:51.572124       1 controller.go:711] "Syncing nftables rules"
	I1120 20:55:01.186418       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 20:55:01.186494       1 main.go:301] handling current node
	I1120 20:55:11.185735       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 20:55:11.185774       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3db382118c3056b2d8b4ed257f6012e141be4e2f391642de800c6c8b4308cdfa] <==
	I1120 20:54:41.724545       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 20:54:41.725146       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1120 20:54:41.727007       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:54:41.731744       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:54:41.733202       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 20:54:41.734687       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 20:54:41.753907       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 20:54:42.627832       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 20:54:42.637662       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 20:54:42.637688       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 20:54:43.109690       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 20:54:43.148330       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 20:54:43.224826       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 20:54:43.231399       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1120 20:54:43.232538       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 20:54:43.237023       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 20:54:43.686839       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 20:54:44.261473       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 20:54:44.270328       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 20:54:44.277866       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 20:54:49.541697       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:54:49.546335       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:54:49.740174       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 20:54:49.788764       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1120 20:55:12.387241       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:44870: use of closed network connection
	
	
	==> kube-controller-manager [860d0852403dfa81b9879b3a10cfdbf9452c81cf9849e45c8f5206f57d37b4a8] <==
	I1120 20:54:48.655074       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 20:54:48.661967       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 20:54:48.669213       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 20:54:48.676518       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 20:54:48.685284       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 20:54:48.685331       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 20:54:48.685331       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 20:54:48.685377       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1120 20:54:48.685399       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:54:48.685408       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 20:54:48.685416       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 20:54:48.685537       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1120 20:54:48.686835       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 20:54:48.686885       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 20:54:48.686933       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 20:54:48.686955       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 20:54:48.686994       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 20:54:48.687015       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1120 20:54:48.687418       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 20:54:48.687004       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 20:54:48.688056       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 20:54:48.689571       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 20:54:48.689886       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:54:48.706282       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:55:03.638274       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [29cd300441f9d92488c4ced8b1bb62f46fcaa21732cc0c1ce556887d74710dbf] <==
	I1120 20:54:50.429459       1 server_linux.go:53] "Using iptables proxy"
	I1120 20:54:50.503721       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 20:54:50.604430       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 20:54:50.604472       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1120 20:54:50.604591       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 20:54:50.632735       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 20:54:50.632794       1 server_linux.go:132] "Using iptables Proxier"
	I1120 20:54:50.638627       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 20:54:50.638992       1 server.go:527] "Version info" version="v1.34.1"
	I1120 20:54:50.639026       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:54:50.640635       1 config.go:200] "Starting service config controller"
	I1120 20:54:50.640662       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 20:54:50.640698       1 config.go:106] "Starting endpoint slice config controller"
	I1120 20:54:50.640694       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 20:54:50.640715       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 20:54:50.640704       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 20:54:50.640749       1 config.go:309] "Starting node config controller"
	I1120 20:54:50.640759       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 20:54:50.640766       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 20:54:50.741574       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 20:54:50.741642       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 20:54:50.741656       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [10e893bdc3051a23e048f7f2812d625e1c495d7a3a82c593dd4edd7fbd1f5824] <==
	E1120 20:54:41.699382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:54:41.698950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 20:54:41.698993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 20:54:41.699023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 20:54:41.699519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:54:41.699768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:54:41.699853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 20:54:41.699913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 20:54:41.700201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:54:41.698873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 20:54:41.699629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 20:54:41.701036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:54:42.513986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:54:42.558339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 20:54:42.587770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 20:54:42.639008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:54:42.693875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:54:42.814661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 20:54:42.840916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 20:54:42.848009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 20:54:42.905725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 20:54:42.942934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:54:42.948432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 20:54:42.952645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1120 20:54:43.292177       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 20:54:45 default-k8s-diff-port-053182 kubelet[1466]: E1120 20:54:45.146522    1466 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-053182\" already exists" pod="kube-system/etcd-default-k8s-diff-port-053182"
	Nov 20 20:54:45 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:45.182031    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-053182" podStartSLOduration=1.182009204 podStartE2EDuration="1.182009204s" podCreationTimestamp="2025-11-20 20:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:45.18199247 +0000 UTC m=+1.159898856" watchObservedRunningTime="2025-11-20 20:54:45.182009204 +0000 UTC m=+1.159915591"
	Nov 20 20:54:45 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:45.214618    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-053182" podStartSLOduration=1.214590759 podStartE2EDuration="1.214590759s" podCreationTimestamp="2025-11-20 20:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:45.195734202 +0000 UTC m=+1.173640586" watchObservedRunningTime="2025-11-20 20:54:45.214590759 +0000 UTC m=+1.192497149"
	Nov 20 20:54:45 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:45.224689    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-053182" podStartSLOduration=2.224668767 podStartE2EDuration="2.224668767s" podCreationTimestamp="2025-11-20 20:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:45.214807367 +0000 UTC m=+1.192713750" watchObservedRunningTime="2025-11-20 20:54:45.224668767 +0000 UTC m=+1.202575157"
	Nov 20 20:54:45 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:45.239988    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-053182" podStartSLOduration=1.239942503 podStartE2EDuration="1.239942503s" podCreationTimestamp="2025-11-20 20:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:45.224659978 +0000 UTC m=+1.202566364" watchObservedRunningTime="2025-11-20 20:54:45.239942503 +0000 UTC m=+1.217848887"
	Nov 20 20:54:48 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:48.661762    1466 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 20:54:48 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:48.662511    1466 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 20:54:49 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:49.840140    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f060cb7-fe2e-40da-b620-0ae4ab1b46ca-lib-modules\") pod \"kindnet-sg6pg\" (UID: \"1f060cb7-fe2e-40da-b620-0ae4ab1b46ca\") " pod="kube-system/kindnet-sg6pg"
	Nov 20 20:54:49 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:49.841352    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3-xtables-lock\") pod \"kube-proxy-9dwtf\" (UID: \"f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3\") " pod="kube-system/kube-proxy-9dwtf"
	Nov 20 20:54:49 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:49.841422    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz99l\" (UniqueName: \"kubernetes.io/projected/f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3-kube-api-access-zz99l\") pod \"kube-proxy-9dwtf\" (UID: \"f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3\") " pod="kube-system/kube-proxy-9dwtf"
	Nov 20 20:54:49 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:49.843069    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3-lib-modules\") pod \"kube-proxy-9dwtf\" (UID: \"f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3\") " pod="kube-system/kube-proxy-9dwtf"
	Nov 20 20:54:49 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:49.843111    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1f060cb7-fe2e-40da-b620-0ae4ab1b46ca-cni-cfg\") pod \"kindnet-sg6pg\" (UID: \"1f060cb7-fe2e-40da-b620-0ae4ab1b46ca\") " pod="kube-system/kindnet-sg6pg"
	Nov 20 20:54:49 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:49.843137    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3-kube-proxy\") pod \"kube-proxy-9dwtf\" (UID: \"f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3\") " pod="kube-system/kube-proxy-9dwtf"
	Nov 20 20:54:49 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:49.843158    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f060cb7-fe2e-40da-b620-0ae4ab1b46ca-xtables-lock\") pod \"kindnet-sg6pg\" (UID: \"1f060cb7-fe2e-40da-b620-0ae4ab1b46ca\") " pod="kube-system/kindnet-sg6pg"
	Nov 20 20:54:49 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:49.843208    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m9kn\" (UniqueName: \"kubernetes.io/projected/1f060cb7-fe2e-40da-b620-0ae4ab1b46ca-kube-api-access-8m9kn\") pod \"kindnet-sg6pg\" (UID: \"1f060cb7-fe2e-40da-b620-0ae4ab1b46ca\") " pod="kube-system/kindnet-sg6pg"
	Nov 20 20:54:51 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:51.172641    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9dwtf" podStartSLOduration=2.17261842 podStartE2EDuration="2.17261842s" podCreationTimestamp="2025-11-20 20:54:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:51.161750952 +0000 UTC m=+7.139657341" watchObservedRunningTime="2025-11-20 20:54:51.17261842 +0000 UTC m=+7.150524809"
	Nov 20 20:54:51 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:51.184217    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-sg6pg" podStartSLOduration=2.184192715 podStartE2EDuration="2.184192715s" podCreationTimestamp="2025-11-20 20:54:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:51.183993959 +0000 UTC m=+7.161900345" watchObservedRunningTime="2025-11-20 20:54:51.184192715 +0000 UTC m=+7.162099103"
	Nov 20 20:55:01 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:55:01.263549    1466 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 20 20:55:01 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:55:01.326842    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/47956acc-9579-4eb7-9d9f-a6e82239fcd8-tmp\") pod \"storage-provisioner\" (UID: \"47956acc-9579-4eb7-9d9f-a6e82239fcd8\") " pod="kube-system/storage-provisioner"
	Nov 20 20:55:01 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:55:01.326903    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g59m\" (UniqueName: \"kubernetes.io/projected/47956acc-9579-4eb7-9d9f-a6e82239fcd8-kube-api-access-9g59m\") pod \"storage-provisioner\" (UID: \"47956acc-9579-4eb7-9d9f-a6e82239fcd8\") " pod="kube-system/storage-provisioner"
	Nov 20 20:55:01 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:55:01.326946    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7af76736-ef8a-434f-ad0c-b52641f9f02d-config-volume\") pod \"coredns-66bc5c9577-m5kfb\" (UID: \"7af76736-ef8a-434f-ad0c-b52641f9f02d\") " pod="kube-system/coredns-66bc5c9577-m5kfb"
	Nov 20 20:55:01 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:55:01.326968    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvq5q\" (UniqueName: \"kubernetes.io/projected/7af76736-ef8a-434f-ad0c-b52641f9f02d-kube-api-access-zvq5q\") pod \"coredns-66bc5c9577-m5kfb\" (UID: \"7af76736-ef8a-434f-ad0c-b52641f9f02d\") " pod="kube-system/coredns-66bc5c9577-m5kfb"
	Nov 20 20:55:02 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:55:02.183864    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.183844095 podStartE2EDuration="12.183844095s" podCreationTimestamp="2025-11-20 20:54:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:55:02.183557992 +0000 UTC m=+18.161464380" watchObservedRunningTime="2025-11-20 20:55:02.183844095 +0000 UTC m=+18.161750481"
	Nov 20 20:55:02 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:55:02.193120    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-m5kfb" podStartSLOduration=13.193100271 podStartE2EDuration="13.193100271s" podCreationTimestamp="2025-11-20 20:54:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:55:02.192763829 +0000 UTC m=+18.170670214" watchObservedRunningTime="2025-11-20 20:55:02.193100271 +0000 UTC m=+18.171006656"
	Nov 20 20:55:04 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:55:04.343774    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z55jv\" (UniqueName: \"kubernetes.io/projected/d7fdd532-26fc-4206-b10a-0b4b374325ee-kube-api-access-z55jv\") pod \"busybox\" (UID: \"d7fdd532-26fc-4206-b10a-0b4b374325ee\") " pod="default/busybox"
	
	
	==> storage-provisioner [9600e46673cac711c07280ca4ae551bebb53a0a2f42ed748b9683017d7c9c837] <==
	I1120 20:55:01.776262       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 20:55:01.784951       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 20:55:01.785000       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 20:55:01.787337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:01.792801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 20:55:01.793124       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 20:55:01.793181       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba78d1b6-3e16-4f55-a5b7-7575bdeabcc4", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-053182_ba54a89a-9359-4ed7-b2a4-9993d1bb52cf became leader
	I1120 20:55:01.793303       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-053182_ba54a89a-9359-4ed7-b2a4-9993d1bb52cf!
	W1120 20:55:01.795391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:01.800399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 20:55:01.893589       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-053182_ba54a89a-9359-4ed7-b2a4-9993d1bb52cf!
	W1120 20:55:03.803513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:03.809123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:05.812114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:05.816531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:07.823227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:07.829937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:09.834109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:09.839142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:11.842953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:11.847439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:13.851139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:13.855335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
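The sections above are per-component excerpts from the node's log bundle. If more context is needed than the tail shown here, the same components can be queried directly through the apiserver; a minimal sketch, reusing the kubeconfig context and the static-pod names that appear in the kubelet log above (everything else is standard kubectl):

	kubectl --context default-k8s-diff-port-053182 -n kube-system logs etcd-default-k8s-diff-port-053182
	kubectl --context default-k8s-diff-port-053182 -n kube-system logs kube-apiserver-default-k8s-diff-port-053182

The same pattern applies to the kube-controller-manager and kube-scheduler static pods named in the kubelet log above.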
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-053182 -n default-k8s-diff-port-053182
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-053182 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-053182
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-053182:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "963627da0e76ef5e5cbc9378eaf40cefb4f32c0658a6e69d7b47df7b412cbfab",
	        "Created": "2025-11-20T20:54:27.695157679Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 269321,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T20:54:27.734457835Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/963627da0e76ef5e5cbc9378eaf40cefb4f32c0658a6e69d7b47df7b412cbfab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/963627da0e76ef5e5cbc9378eaf40cefb4f32c0658a6e69d7b47df7b412cbfab/hostname",
	        "HostsPath": "/var/lib/docker/containers/963627da0e76ef5e5cbc9378eaf40cefb4f32c0658a6e69d7b47df7b412cbfab/hosts",
	        "LogPath": "/var/lib/docker/containers/963627da0e76ef5e5cbc9378eaf40cefb4f32c0658a6e69d7b47df7b412cbfab/963627da0e76ef5e5cbc9378eaf40cefb4f32c0658a6e69d7b47df7b412cbfab-json.log",
	        "Name": "/default-k8s-diff-port-053182",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-053182:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-053182",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "963627da0e76ef5e5cbc9378eaf40cefb4f32c0658a6e69d7b47df7b412cbfab",
	                "LowerDir": "/var/lib/docker/overlay2/55e5f6b5ca700e8cb83aaf4f3e862bb714728d9a772d402f94e3fe4379c0961a-init/diff:/var/lib/docker/overlay2/b8e13cfd95c92c89e06ea4ca61f150e2b9e9586529048197192d1a83648ef8cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/55e5f6b5ca700e8cb83aaf4f3e862bb714728d9a772d402f94e3fe4379c0961a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/55e5f6b5ca700e8cb83aaf4f3e862bb714728d9a772d402f94e3fe4379c0961a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/55e5f6b5ca700e8cb83aaf4f3e862bb714728d9a772d402f94e3fe4379c0961a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-053182",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-053182/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-053182",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-053182",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-053182",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "614313afd378b4568997eaf040b0cdf2f33329765d4a8b736a177852cdfd97f6",
	            "SandboxKey": "/var/run/docker/netns/614313afd378",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-053182": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4f214e15c73bb9c6c638c72095a989fd20575dded2cc6854dc6057351fd56bb9",
	                    "EndpointID": "e5eba6478597fd9e4de5cb5d1ddd50d38f1c208068b4c53a5730a85020143776",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "be:bf:c1:12:e9:7e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-053182",
	                        "963627da0e76"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
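Rather than scanning the full JSON above, individual fields can be pulled from the same inspect output with a Go-template format string. A minimal sketch that extracts the host port mapped to the container's 8444/tcp API server port (per the Ports block above, this resolves to 127.0.0.1:33087):

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort }}' default-k8s-diff-port-053182

The same template approach works for any other field shown above, for example .State.Status or .NetworkSettings.Networks.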
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-053182 -n default-k8s-diff-port-053182
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-053182 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-053182 logs -n 25: (1.109389178s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ pause   │ -p old-k8s-version-715005 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-715005       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ unpause │ -p old-k8s-version-715005 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-715005       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ delete  │ -p old-k8s-version-715005                                                                                                                                                                                                                           │ old-k8s-version-715005       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ delete  │ -p old-k8s-version-715005                                                                                                                                                                                                                           │ old-k8s-version-715005       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ start   │ -p embed-certs-954820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-954820           │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ image   │ no-preload-480337 image list --format=json                                                                                                                                                                                                          │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ pause   │ -p no-preload-480337 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ start   │ -p cert-expiration-137718 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-137718       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ unpause │ -p no-preload-480337 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ delete  │ -p no-preload-480337                                                                                                                                                                                                                                │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ delete  │ -p no-preload-480337                                                                                                                                                                                                                                │ no-preload-480337            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ delete  │ -p disable-driver-mounts-311936                                                                                                                                                                                                                     │ disable-driver-mounts-311936 │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ start   │ -p default-k8s-diff-port-053182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-053182 │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:55 UTC │
	│ delete  │ -p cert-expiration-137718                                                                                                                                                                                                                           │ cert-expiration-137718       │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ start   │ -p newest-cni-439796 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ addons  │ enable metrics-server -p newest-cni-439796 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ stop    │ -p newest-cni-439796 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ addons  │ enable dashboard -p newest-cni-439796 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:54 UTC │
	│ start   │ -p newest-cni-439796 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:54 UTC │ 20 Nov 25 20:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-954820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-954820           │ jenkins │ v1.37.0 │ 20 Nov 25 20:55 UTC │ 20 Nov 25 20:55 UTC │
	│ stop    │ -p embed-certs-954820 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-954820           │ jenkins │ v1.37.0 │ 20 Nov 25 20:55 UTC │                     │
	│ image   │ newest-cni-439796 image list --format=json                                                                                                                                                                                                          │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:55 UTC │ 20 Nov 25 20:55 UTC │
	│ pause   │ -p newest-cni-439796 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:55 UTC │ 20 Nov 25 20:55 UTC │
	│ unpause │ -p newest-cni-439796 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:55 UTC │ 20 Nov 25 20:55 UTC │
	│ delete  │ -p newest-cni-439796                                                                                                                                                                                                                                │ newest-cni-439796            │ jenkins │ v1.37.0 │ 20 Nov 25 20:55 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:54:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:54:59.857828  278240 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:54:59.858105  278240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:54:59.858115  278240 out.go:374] Setting ErrFile to fd 2...
	I1120 20:54:59.858119  278240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:54:59.858349  278240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
	I1120 20:54:59.858826  278240 out.go:368] Setting JSON to false
	I1120 20:54:59.860194  278240 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2252,"bootTime":1763669848,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:54:59.860277  278240 start.go:143] virtualization: kvm guest
	I1120 20:54:59.862251  278240 out.go:179] * [newest-cni-439796] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:54:59.863664  278240 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:54:59.863667  278240 notify.go:221] Checking for updates...
	I1120 20:54:59.864889  278240 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:54:59.866102  278240 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3769/kubeconfig
	I1120 20:54:59.867392  278240 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3769/.minikube
	I1120 20:54:59.868550  278240 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:54:59.869682  278240 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:54:59.871457  278240 config.go:182] Loaded profile config "newest-cni-439796": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:54:59.871972  278240 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:54:59.895937  278240 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:54:59.896024  278240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:54:59.953310  278240 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-20 20:54:59.943244297 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:54:59.953450  278240 docker.go:319] overlay module found
	I1120 20:54:59.955196  278240 out.go:179] * Using the docker driver based on existing profile
	I1120 20:54:59.956312  278240 start.go:309] selected driver: docker
	I1120 20:54:59.956329  278240 start.go:930] validating driver "docker" against &{Name:newest-cni-439796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-439796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:54:59.956444  278240 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:54:59.956970  278240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:55:00.019097  278240 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-20 20:55:00.008082303 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:55:00.019426  278240 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 20:55:00.019462  278240 cni.go:84] Creating CNI manager for ""
	I1120 20:55:00.019528  278240 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 20:55:00.019596  278240 start.go:353] cluster config:
	{Name:newest-cni-439796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-439796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:55:00.021170  278240 out.go:179] * Starting "newest-cni-439796" primary control-plane node in "newest-cni-439796" cluster
	I1120 20:55:00.022241  278240 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1120 20:55:00.023448  278240 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:55:00.024648  278240 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 20:55:00.024678  278240 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1120 20:55:00.024688  278240 cache.go:65] Caching tarball of preloaded images
	I1120 20:55:00.024751  278240 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:55:00.024781  278240 preload.go:238] Found /home/jenkins/minikube-integration/21923-3769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1120 20:55:00.024793  278240 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1120 20:55:00.024892  278240 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796/config.json ...
	I1120 20:55:00.047349  278240 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 20:55:00.047385  278240 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 20:55:00.047421  278240 cache.go:243] Successfully downloaded all kic artifacts
	I1120 20:55:00.047453  278240 start.go:360] acquireMachinesLock for newest-cni-439796: {Name:mkd377b5021ac8b488b2c648334cf58462a4dda8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:55:00.047519  278240 start.go:364] duration metric: took 41.671µs to acquireMachinesLock for "newest-cni-439796"
	I1120 20:55:00.047542  278240 start.go:96] Skipping create...Using existing machine configuration
	I1120 20:55:00.047552  278240 fix.go:54] fixHost starting: 
	I1120 20:55:00.047793  278240 cli_runner.go:164] Run: docker container inspect newest-cni-439796 --format={{.State.Status}}
	I1120 20:55:00.066752  278240 fix.go:112] recreateIfNeeded on newest-cni-439796: state=Stopped err=<nil>
	W1120 20:55:00.066782  278240 fix.go:138] unexpected machine state, will restart: <nil>
	W1120 20:54:59.168958  267938 node_ready.go:57] node "default-k8s-diff-port-053182" has "Ready":"False" status (will retry)
	W1120 20:55:01.169101  267938 node_ready.go:57] node "default-k8s-diff-port-053182" has "Ready":"False" status (will retry)
	I1120 20:55:01.669497  267938 node_ready.go:49] node "default-k8s-diff-port-053182" is "Ready"
	I1120 20:55:01.669530  267938 node_ready.go:38] duration metric: took 11.503696878s for node "default-k8s-diff-port-053182" to be "Ready" ...
	I1120 20:55:01.669547  267938 api_server.go:52] waiting for apiserver process to appear ...
	I1120 20:55:01.669608  267938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:55:01.684444  267938 api_server.go:72] duration metric: took 11.853641818s to wait for apiserver process to appear ...
	I1120 20:55:01.684479  267938 api_server.go:88] waiting for apiserver healthz status ...
	I1120 20:55:01.684517  267938 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1120 20:55:01.690782  267938 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1120 20:55:01.691893  267938 api_server.go:141] control plane version: v1.34.1
	I1120 20:55:01.691922  267938 api_server.go:131] duration metric: took 7.434681ms to wait for apiserver health ...
	I1120 20:55:01.691934  267938 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 20:55:01.695775  267938 system_pods.go:59] 8 kube-system pods found
	I1120 20:55:01.695832  267938 system_pods.go:61] "coredns-66bc5c9577-m5kfb" [7af76736-ef8a-434f-ad0c-b52641f9f02d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:55:01.695845  267938 system_pods.go:61] "etcd-default-k8s-diff-port-053182" [bd91f04b-5f3e-4a56-9854-44217a3e84c4] Running
	I1120 20:55:01.695858  267938 system_pods.go:61] "kindnet-sg6pg" [1f060cb7-fe2e-40da-b620-0ae4ab1b46ca] Running
	I1120 20:55:01.695873  267938 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-053182" [233f521c-596a-48b5-a075-6f7047f8681e] Running
	I1120 20:55:01.695882  267938 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-053182" [b8030abf-6545-401c-9be1-ff6d1e183855] Running
	I1120 20:55:01.695888  267938 system_pods.go:61] "kube-proxy-9dwtf" [f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3] Running
	I1120 20:55:01.695897  267938 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-053182" [e69e64b5-7879-46ff-9920-5090e462be17] Running
	I1120 20:55:01.695905  267938 system_pods.go:61] "storage-provisioner" [47956acc-9579-4eb7-9d9f-a6e82239fcd8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:55:01.695917  267938 system_pods.go:74] duration metric: took 3.975656ms to wait for pod list to return data ...
	I1120 20:55:01.695931  267938 default_sa.go:34] waiting for default service account to be created ...
	I1120 20:55:01.702110  267938 default_sa.go:45] found service account: "default"
	I1120 20:55:01.702135  267938 default_sa.go:55] duration metric: took 6.196385ms for default service account to be created ...
	I1120 20:55:01.702146  267938 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 20:55:01.796537  267938 system_pods.go:86] 8 kube-system pods found
	I1120 20:55:01.796576  267938 system_pods.go:89] "coredns-66bc5c9577-m5kfb" [7af76736-ef8a-434f-ad0c-b52641f9f02d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:55:01.796585  267938 system_pods.go:89] "etcd-default-k8s-diff-port-053182" [bd91f04b-5f3e-4a56-9854-44217a3e84c4] Running
	I1120 20:55:01.796599  267938 system_pods.go:89] "kindnet-sg6pg" [1f060cb7-fe2e-40da-b620-0ae4ab1b46ca] Running
	I1120 20:55:01.796605  267938 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-053182" [233f521c-596a-48b5-a075-6f7047f8681e] Running
	I1120 20:55:01.796610  267938 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-053182" [b8030abf-6545-401c-9be1-ff6d1e183855] Running
	I1120 20:55:01.796621  267938 system_pods.go:89] "kube-proxy-9dwtf" [f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3] Running
	I1120 20:55:01.796626  267938 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-053182" [e69e64b5-7879-46ff-9920-5090e462be17] Running
	I1120 20:55:01.796634  267938 system_pods.go:89] "storage-provisioner" [47956acc-9579-4eb7-9d9f-a6e82239fcd8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:55:01.796670  267938 retry.go:31] will retry after 230.554359ms: missing components: kube-dns
	I1120 20:55:02.032424  267938 system_pods.go:86] 8 kube-system pods found
	I1120 20:55:02.032457  267938 system_pods.go:89] "coredns-66bc5c9577-m5kfb" [7af76736-ef8a-434f-ad0c-b52641f9f02d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:55:02.032465  267938 system_pods.go:89] "etcd-default-k8s-diff-port-053182" [bd91f04b-5f3e-4a56-9854-44217a3e84c4] Running
	I1120 20:55:02.032474  267938 system_pods.go:89] "kindnet-sg6pg" [1f060cb7-fe2e-40da-b620-0ae4ab1b46ca] Running
	I1120 20:55:02.032479  267938 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-053182" [233f521c-596a-48b5-a075-6f7047f8681e] Running
	I1120 20:55:02.032484  267938 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-053182" [b8030abf-6545-401c-9be1-ff6d1e183855] Running
	I1120 20:55:02.032489  267938 system_pods.go:89] "kube-proxy-9dwtf" [f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3] Running
	I1120 20:55:02.032493  267938 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-053182" [e69e64b5-7879-46ff-9920-5090e462be17] Running
	I1120 20:55:02.032500  267938 system_pods.go:89] "storage-provisioner" [47956acc-9579-4eb7-9d9f-a6e82239fcd8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:55:02.032519  267938 retry.go:31] will retry after 327.025815ms: missing components: kube-dns
	I1120 20:55:02.365222  267938 system_pods.go:86] 8 kube-system pods found
	I1120 20:55:02.365305  267938 system_pods.go:89] "coredns-66bc5c9577-m5kfb" [7af76736-ef8a-434f-ad0c-b52641f9f02d] Running
	I1120 20:55:02.365316  267938 system_pods.go:89] "etcd-default-k8s-diff-port-053182" [bd91f04b-5f3e-4a56-9854-44217a3e84c4] Running
	I1120 20:55:02.365326  267938 system_pods.go:89] "kindnet-sg6pg" [1f060cb7-fe2e-40da-b620-0ae4ab1b46ca] Running
	I1120 20:55:02.365334  267938 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-053182" [233f521c-596a-48b5-a075-6f7047f8681e] Running
	I1120 20:55:02.365351  267938 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-053182" [b8030abf-6545-401c-9be1-ff6d1e183855] Running
	I1120 20:55:02.365357  267938 system_pods.go:89] "kube-proxy-9dwtf" [f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3] Running
	I1120 20:55:02.365363  267938 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-053182" [e69e64b5-7879-46ff-9920-5090e462be17] Running
	I1120 20:55:02.365394  267938 system_pods.go:89] "storage-provisioner" [47956acc-9579-4eb7-9d9f-a6e82239fcd8] Running
	I1120 20:55:02.365405  267938 system_pods.go:126] duration metric: took 663.251244ms to wait for k8s-apps to be running ...
	I1120 20:55:02.365435  267938 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:55:02.365836  267938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:55:02.379259  267938 system_svc.go:56] duration metric: took 13.837433ms WaitForService to wait for kubelet
	I1120 20:55:02.379293  267938 kubeadm.go:587] duration metric: took 12.548497918s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:55:02.379319  267938 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:55:02.382189  267938 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:55:02.382219  267938 node_conditions.go:123] node cpu capacity is 8
	I1120 20:55:02.382231  267938 node_conditions.go:105] duration metric: took 2.905948ms to run NodePressure ...
	I1120 20:55:02.382244  267938 start.go:242] waiting for startup goroutines ...
	I1120 20:55:02.382254  267938 start.go:247] waiting for cluster config update ...
	I1120 20:55:02.382269  267938 start.go:256] writing updated cluster config ...
	I1120 20:55:02.382592  267938 ssh_runner.go:195] Run: rm -f paused
	I1120 20:55:02.386235  267938 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:55:02.389651  267938 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m5kfb" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.393726  267938 pod_ready.go:94] pod "coredns-66bc5c9577-m5kfb" is "Ready"
	I1120 20:55:02.393745  267938 pod_ready.go:86] duration metric: took 4.074153ms for pod "coredns-66bc5c9577-m5kfb" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.395689  267938 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.399316  267938 pod_ready.go:94] pod "etcd-default-k8s-diff-port-053182" is "Ready"
	I1120 20:55:02.399335  267938 pod_ready.go:86] duration metric: took 3.628858ms for pod "etcd-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.401248  267938 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.404743  267938 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-053182" is "Ready"
	I1120 20:55:02.404759  267938 pod_ready.go:86] duration metric: took 3.496456ms for pod "kube-apiserver-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.406414  267938 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.790539  267938 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-053182" is "Ready"
	I1120 20:55:02.790573  267938 pod_ready.go:86] duration metric: took 384.138389ms for pod "kube-controller-manager-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:02.990773  267938 pod_ready.go:83] waiting for pod "kube-proxy-9dwtf" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:03.390942  267938 pod_ready.go:94] pod "kube-proxy-9dwtf" is "Ready"
	I1120 20:55:03.390966  267938 pod_ready.go:86] duration metric: took 400.162298ms for pod "kube-proxy-9dwtf" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:03.591644  267938 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:03.990591  267938 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-053182" is "Ready"
	I1120 20:55:03.990620  267938 pod_ready.go:86] duration metric: took 398.945663ms for pod "kube-scheduler-default-k8s-diff-port-053182" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:55:03.990634  267938 pod_ready.go:40] duration metric: took 1.604373018s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:55:04.040872  267938 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 20:55:04.046253  267938 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-053182" cluster and "default" namespace by default
	I1120 20:55:00.068525  278240 out.go:252] * Restarting existing docker container for "newest-cni-439796" ...
	I1120 20:55:00.068597  278240 cli_runner.go:164] Run: docker start newest-cni-439796
	I1120 20:55:00.341240  278240 cli_runner.go:164] Run: docker container inspect newest-cni-439796 --format={{.State.Status}}
	I1120 20:55:00.361218  278240 kic.go:430] container "newest-cni-439796" state is running.
	I1120 20:55:00.361592  278240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-439796
	I1120 20:55:00.380436  278240 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796/config.json ...
	I1120 20:55:00.380646  278240 machine.go:94] provisionDockerMachine start ...
	I1120 20:55:00.380703  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:00.399740  278240 main.go:143] libmachine: Using SSH client type: native
	I1120 20:55:00.399992  278240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1120 20:55:00.400005  278240 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:55:00.400638  278240 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44044->127.0.0.1:33094: read: connection reset by peer
	I1120 20:55:03.537357  278240 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-439796
	
	I1120 20:55:03.537416  278240 ubuntu.go:182] provisioning hostname "newest-cni-439796"
	I1120 20:55:03.537490  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:03.564681  278240 main.go:143] libmachine: Using SSH client type: native
	I1120 20:55:03.565007  278240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1120 20:55:03.565025  278240 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-439796 && echo "newest-cni-439796" | sudo tee /etc/hostname
	I1120 20:55:03.714348  278240 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-439796
	
	I1120 20:55:03.714449  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:03.733081  278240 main.go:143] libmachine: Using SSH client type: native
	I1120 20:55:03.733307  278240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1120 20:55:03.733326  278240 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-439796' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-439796/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-439796' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:55:03.870069  278240 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:55:03.870099  278240 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-3769/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-3769/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-3769/.minikube}
	I1120 20:55:03.870136  278240 ubuntu.go:190] setting up certificates
	I1120 20:55:03.870148  278240 provision.go:84] configureAuth start
	I1120 20:55:03.870204  278240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-439796
	I1120 20:55:03.888998  278240 provision.go:143] copyHostCerts
	I1120 20:55:03.889072  278240 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3769/.minikube/ca.pem, removing ...
	I1120 20:55:03.889086  278240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3769/.minikube/ca.pem
	I1120 20:55:03.889169  278240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-3769/.minikube/ca.pem (1082 bytes)
	I1120 20:55:03.889364  278240 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3769/.minikube/cert.pem, removing ...
	I1120 20:55:03.889391  278240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3769/.minikube/cert.pem
	I1120 20:55:03.889436  278240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-3769/.minikube/cert.pem (1123 bytes)
	I1120 20:55:03.889525  278240 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3769/.minikube/key.pem, removing ...
	I1120 20:55:03.889536  278240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3769/.minikube/key.pem
	I1120 20:55:03.889569  278240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-3769/.minikube/key.pem (1679 bytes)
	I1120 20:55:03.889647  278240 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-3769/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca-key.pem org=jenkins.newest-cni-439796 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-439796]
	I1120 20:55:04.066966  278240 provision.go:177] copyRemoteCerts
	I1120 20:55:04.067036  278240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:55:04.067080  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:04.090856  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:04.196925  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:55:04.217358  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 20:55:04.242617  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 20:55:04.262514  278240 provision.go:87] duration metric: took 392.354465ms to configureAuth
	I1120 20:55:04.262545  278240 ubuntu.go:206] setting minikube options for container-runtime
	I1120 20:55:04.262716  278240 config.go:182] Loaded profile config "newest-cni-439796": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:55:04.262727  278240 machine.go:97] duration metric: took 3.882068475s to provisionDockerMachine
	I1120 20:55:04.262735  278240 start.go:293] postStartSetup for "newest-cni-439796" (driver="docker")
	I1120 20:55:04.262744  278240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:55:04.262787  278240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:55:04.262830  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:04.283586  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:04.382700  278240 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:55:04.386689  278240 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 20:55:04.386720  278240 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 20:55:04.386734  278240 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3769/.minikube/addons for local assets ...
	I1120 20:55:04.386784  278240 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3769/.minikube/files for local assets ...
	I1120 20:55:04.386890  278240 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-3769/.minikube/files/etc/ssl/certs/77312.pem -> 77312.pem in /etc/ssl/certs
	I1120 20:55:04.387094  278240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 20:55:04.395171  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/files/etc/ssl/certs/77312.pem --> /etc/ssl/certs/77312.pem (1708 bytes)
	I1120 20:55:04.412782  278240 start.go:296] duration metric: took 150.034316ms for postStartSetup
	I1120 20:55:04.412864  278240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:55:04.412910  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:04.433695  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:04.530336  278240 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 20:55:04.535206  278240 fix.go:56] duration metric: took 4.48764827s for fixHost
	I1120 20:55:04.535232  278240 start.go:83] releasing machines lock for "newest-cni-439796", held for 4.487699701s
	I1120 20:55:04.535302  278240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-439796
	I1120 20:55:04.557073  278240 ssh_runner.go:195] Run: cat /version.json
	I1120 20:55:04.557151  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:04.557181  278240 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:55:04.557249  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:04.579766  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:04.580774  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:04.679945  278240 ssh_runner.go:195] Run: systemctl --version
	I1120 20:55:04.743090  278240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:55:04.748524  278240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:55:04.748593  278240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:55:04.757428  278240 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 20:55:04.757454  278240 start.go:496] detecting cgroup driver to use...
	I1120 20:55:04.757485  278240 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 20:55:04.757548  278240 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1120 20:55:04.776538  278240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1120 20:55:04.791147  278240 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:55:04.791216  278240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:55:04.809821  278240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:55:04.824474  278240 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:55:04.915359  278240 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:55:05.005773  278240 docker.go:234] disabling docker service ...
	I1120 20:55:05.005848  278240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:55:05.022479  278240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:55:05.035295  278240 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:55:05.127413  278240 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:55:05.222594  278240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 20:55:05.237063  278240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:55:05.255195  278240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1120 20:55:05.265033  278240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1120 20:55:05.275404  278240 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1120 20:55:05.275476  278240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1120 20:55:05.286052  278240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1120 20:55:05.295782  278240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1120 20:55:05.304979  278240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1120 20:55:05.314472  278240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:55:05.323167  278240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1120 20:55:05.332745  278240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1120 20:55:05.342479  278240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1120 20:55:05.351858  278240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:55:05.359745  278240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 20:55:05.367752  278240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:55:05.471088  278240 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1120 20:55:05.591585  278240 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1120 20:55:05.591681  278240 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1120 20:55:05.596088  278240 start.go:564] Will wait 60s for crictl version
	I1120 20:55:05.596147  278240 ssh_runner.go:195] Run: which crictl
	I1120 20:55:05.600407  278240 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 20:55:05.629326  278240 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1120 20:55:05.629405  278240 ssh_runner.go:195] Run: containerd --version
	I1120 20:55:05.655318  278240 ssh_runner.go:195] Run: containerd --version
	I1120 20:55:05.684274  278240 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1120 20:55:05.685537  278240 cli_runner.go:164] Run: docker network inspect newest-cni-439796 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:55:05.705933  278240 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1120 20:55:05.710592  278240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:55:05.723139  278240 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1120 20:55:05.724415  278240 kubeadm.go:884] updating cluster {Name:newest-cni-439796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-439796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 20:55:05.724553  278240 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 20:55:05.724612  278240 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:55:05.751395  278240 containerd.go:627] all images are preloaded for containerd runtime.
	I1120 20:55:05.751418  278240 containerd.go:534] Images already preloaded, skipping extraction
	I1120 20:55:05.751465  278240 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:55:05.779235  278240 containerd.go:627] all images are preloaded for containerd runtime.
	I1120 20:55:05.779260  278240 cache_images.go:86] Images are preloaded, skipping loading
	I1120 20:55:05.779269  278240 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 containerd true true} ...
	I1120 20:55:05.779416  278240 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-439796 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-439796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 20:55:05.779488  278240 ssh_runner.go:195] Run: sudo crictl info
	I1120 20:55:05.807554  278240 cni.go:84] Creating CNI manager for ""
	I1120 20:55:05.807573  278240 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 20:55:05.807589  278240 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1120 20:55:05.807612  278240 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-439796 NodeName:newest-cni-439796 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 20:55:05.807739  278240 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-439796"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 20:55:05.807802  278240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:55:05.817304  278240 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 20:55:05.817359  278240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 20:55:05.825631  278240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1120 20:55:05.840583  278240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:55:05.854420  278240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1120 20:55:05.869219  278240 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1120 20:55:05.873465  278240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:55:05.884129  278240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:55:05.974696  278240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:55:05.997065  278240 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796 for IP: 192.168.94.2
	I1120 20:55:05.997089  278240 certs.go:195] generating shared ca certs ...
	I1120 20:55:05.997109  278240 certs.go:227] acquiring lock for ca certs: {Name:mk775617087d2732283088aad08819408765453b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:55:05.997270  278240 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-3769/.minikube/ca.key
	I1120 20:55:05.997317  278240 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-3769/.minikube/proxy-client-ca.key
	I1120 20:55:05.997332  278240 certs.go:257] generating profile certs ...
	I1120 20:55:05.997481  278240 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796/client.key
	I1120 20:55:05.997548  278240 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796/apiserver.key.2ac9c80b
	I1120 20:55:05.997601  278240 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796/proxy-client.key
	I1120 20:55:05.997753  278240 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/7731.pem (1338 bytes)
	W1120 20:55:05.997800  278240 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-3769/.minikube/certs/7731_empty.pem, impossibly tiny 0 bytes
	I1120 20:55:05.997813  278240 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 20:55:05.997848  278240 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:55:05.997903  278240 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:55:05.997935  278240 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/certs/key.pem (1679 bytes)
	I1120 20:55:05.997975  278240 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3769/.minikube/files/etc/ssl/certs/77312.pem (1708 bytes)
	I1120 20:55:05.998947  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:55:06.022257  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:55:06.047956  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:55:06.072926  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:55:06.099945  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 20:55:06.127812  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 20:55:06.152822  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 20:55:06.173240  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/newest-cni-439796/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 20:55:06.194867  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/files/etc/ssl/certs/77312.pem --> /usr/share/ca-certificates/77312.pem (1708 bytes)
	I1120 20:55:06.217850  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:55:06.242132  278240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3769/.minikube/certs/7731.pem --> /usr/share/ca-certificates/7731.pem (1338 bytes)
	I1120 20:55:06.263764  278240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 20:55:06.277765  278240 ssh_runner.go:195] Run: openssl version
	I1120 20:55:06.285624  278240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/77312.pem
	I1120 20:55:06.294684  278240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/77312.pem /etc/ssl/certs/77312.pem
	I1120 20:55:06.303738  278240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77312.pem
	I1120 20:55:06.308259  278240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:26 /usr/share/ca-certificates/77312.pem
	I1120 20:55:06.308323  278240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77312.pem
	I1120 20:55:06.346444  278240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 20:55:06.354642  278240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:55:06.363400  278240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:55:06.371708  278240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:55:06.376138  278240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:55:06.376194  278240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:55:06.415213  278240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 20:55:06.423811  278240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7731.pem
	I1120 20:55:06.432016  278240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7731.pem /etc/ssl/certs/7731.pem
	I1120 20:55:06.440143  278240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7731.pem
	I1120 20:55:06.444748  278240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:26 /usr/share/ca-certificates/7731.pem
	I1120 20:55:06.444813  278240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7731.pem
	I1120 20:55:06.482317  278240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 20:55:06.491206  278240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:55:06.495446  278240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 20:55:06.534473  278240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 20:55:06.588509  278240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 20:55:06.643817  278240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 20:55:06.703005  278240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 20:55:06.769940  278240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1120 20:55:06.836316  278240 kubeadm.go:401] StartCluster: {Name:newest-cni-439796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-439796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:55:06.836446  278240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1120 20:55:06.836516  278240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:55:06.885298  278240 cri.go:89] found id: "49347eda0182605dd4ef7cf7d0cf00f85c7a5abafe0b8c60126fc160cc70b0a9"
	I1120 20:55:06.885324  278240 cri.go:89] found id: "b29f9a9559f5f73f8ff7f7d1deafc123f8b9b493527b013f62aedb2e79ac8e43"
	I1120 20:55:06.885330  278240 cri.go:89] found id: "fd8955c6a55abdf0fbec0325f092e2e88f77a629f1b1f1c9853794b138bf33e4"
	I1120 20:55:06.885335  278240 cri.go:89] found id: "89ffc1b6476d847109828e6cd3c5db9ee0dbadcd2674eabcdbab71491b20f406"
	I1120 20:55:06.885339  278240 cri.go:89] found id: "680d5ea55c3a1bcffab71661dcad66887fd3065ef54ae42dce5a22da37d85503"
	I1120 20:55:06.885344  278240 cri.go:89] found id: "b4b9911a652a9a0aab927183a3e56fa355872a9d79a72a255ac6a54f8ca414fd"
	I1120 20:55:06.885348  278240 cri.go:89] found id: "8fea5b58894fc92f826d414aa12f8a7b0531f4c497f699fd75d9676afa9f3b9c"
	I1120 20:55:06.885351  278240 cri.go:89] found id: "519979b0715f31a7d1ff9784de4371f78b61b8ce78aa037985a3206e5ebeff15"
	I1120 20:55:06.885355  278240 cri.go:89] found id: "5ff2d4262b7871a5f88a225f6d65dfba458b597ec7a310b7f50f56640e7e4845"
	I1120 20:55:06.885364  278240 cri.go:89] found id: "8c135e548e60296ebe8b92267fc334cc7f2086e45cea67ae14ad02b9bcc16a01"
	I1120 20:55:06.885378  278240 cri.go:89] found id: ""
	I1120 20:55:06.885427  278240 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1120 20:55:06.912701  278240 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"49347eda0182605dd4ef7cf7d0cf00f85c7a5abafe0b8c60126fc160cc70b0a9","pid":986,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/49347eda0182605dd4ef7cf7d0cf00f85c7a5abafe0b8c60126fc160cc70b0a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/49347eda0182605dd4ef7cf7d0cf00f85c7a5abafe0b8c60126fc160cc70b0a9/rootfs","created":"2025-11-20T20:55:06.791401287Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"5991f4f03bd7b8ee06d8e5994261f6bbbb4946baed62b4cc417c5c72a2b67bb1","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-439796","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"20142ccb0fa290322b21529c3fee9f5d"},"owner":"root"},{"ociVersion":"1.2.
1","id":"5325fb115be6f954972b126cb1c83e40b17960f3a320d12907ac86451b2f7e59","pid":859,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5325fb115be6f954972b126cb1c83e40b17960f3a320d12907ac86451b2f7e59","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5325fb115be6f954972b126cb1c83e40b17960f3a320d12907ac86451b2f7e59/rootfs","created":"2025-11-20T20:55:06.641024515Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"5325fb115be6f954972b126cb1c83e40b17960f3a320d12907ac86451b2f7e59","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-439796_b1a9c49c8334b79aea52840a4e22a3ee","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-439796","io
.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b1a9c49c8334b79aea52840a4e22a3ee"},"owner":"root"},{"ociVersion":"1.2.1","id":"5991f4f03bd7b8ee06d8e5994261f6bbbb4946baed62b4cc417c5c72a2b67bb1","pid":866,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5991f4f03bd7b8ee06d8e5994261f6bbbb4946baed62b4cc417c5c72a2b67bb1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5991f4f03bd7b8ee06d8e5994261f6bbbb4946baed62b4cc417c5c72a2b67bb1/rootfs","created":"2025-11-20T20:55:06.645183115Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"5991f4f03bd7b8ee06d8e5994261f6bbbb4946baed62b4cc417c5c72a2b67bb1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-n
ewest-cni-439796_20142ccb0fa290322b21529c3fee9f5d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-439796","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"20142ccb0fa290322b21529c3fee9f5d"},"owner":"root"},{"ociVersion":"1.2.1","id":"7e44b7ce6cb576ae9115f4911a14668aa89687a7c3e2ea1d5b2035a443b72196","pid":810,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e44b7ce6cb576ae9115f4911a14668aa89687a7c3e2ea1d5b2035a443b72196","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e44b7ce6cb576ae9115f4911a14668aa89687a7c3e2ea1d5b2035a443b72196/rootfs","created":"2025-11-20T20:55:06.613077257Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.san
dbox-id":"7e44b7ce6cb576ae9115f4911a14668aa89687a7c3e2ea1d5b2035a443b72196","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-439796_302580d78efe025b0c5d637fd2421ce8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-439796","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"302580d78efe025b0c5d637fd2421ce8"},"owner":"root"},{"ociVersion":"1.2.1","id":"89ffc1b6476d847109828e6cd3c5db9ee0dbadcd2674eabcdbab71491b20f406","pid":951,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/89ffc1b6476d847109828e6cd3c5db9ee0dbadcd2674eabcdbab71491b20f406","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/89ffc1b6476d847109828e6cd3c5db9ee0dbadcd2674eabcdbab71491b20f406/rootfs","created":"2025-11-20T20:55:06.764071056Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3
.6.4-0","io.kubernetes.cri.sandbox-id":"7e44b7ce6cb576ae9115f4911a14668aa89687a7c3e2ea1d5b2035a443b72196","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-439796","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"302580d78efe025b0c5d637fd2421ce8"},"owner":"root"},{"ociVersion":"1.2.1","id":"b29f9a9559f5f73f8ff7f7d1deafc123f8b9b493527b013f62aedb2e79ac8e43","pid":979,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b29f9a9559f5f73f8ff7f7d1deafc123f8b9b493527b013f62aedb2e79ac8e43","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b29f9a9559f5f73f8ff7f7d1deafc123f8b9b493527b013f62aedb2e79ac8e43/rootfs","created":"2025-11-20T20:55:06.779306658Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"bfdb526fd94d494c784332950704c7c1902008b6b4e3a059cee3d9361c0f7f54","io.kuber
netes.cri.sandbox-name":"kube-scheduler-newest-cni-439796","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"669024f429d0435f565278cbf491faff"},"owner":"root"},{"ociVersion":"1.2.1","id":"bfdb526fd94d494c784332950704c7c1902008b6b4e3a059cee3d9361c0f7f54","pid":868,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfdb526fd94d494c784332950704c7c1902008b6b4e3a059cee3d9361c0f7f54","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfdb526fd94d494c784332950704c7c1902008b6b4e3a059cee3d9361c0f7f54/rootfs","created":"2025-11-20T20:55:06.650187729Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"bfdb526fd94d494c784332950704c7c1902008b6b4e3a059cee3d9361c0f7f54","io.kubernetes.cri.sandbox-log-d
irectory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-439796_669024f429d0435f565278cbf491faff","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-439796","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"669024f429d0435f565278cbf491faff"},"owner":"root"},{"ociVersion":"1.2.1","id":"fd8955c6a55abdf0fbec0325f092e2e88f77a629f1b1f1c9853794b138bf33e4","pid":972,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd8955c6a55abdf0fbec0325f092e2e88f77a629f1b1f1c9853794b138bf33e4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd8955c6a55abdf0fbec0325f092e2e88f77a629f1b1f1c9853794b138bf33e4/rootfs","created":"2025-11-20T20:55:06.785297071Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"5325fb115be6f954972b126cb1c8
3e40b17960f3a320d12907ac86451b2f7e59","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-439796","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b1a9c49c8334b79aea52840a4e22a3ee"},"owner":"root"}]
	I1120 20:55:06.912936  278240 cri.go:126] list returned 8 containers
	I1120 20:55:06.912952  278240 cri.go:129] container: {ID:49347eda0182605dd4ef7cf7d0cf00f85c7a5abafe0b8c60126fc160cc70b0a9 Status:running}
	I1120 20:55:06.912975  278240 cri.go:135] skipping {49347eda0182605dd4ef7cf7d0cf00f85c7a5abafe0b8c60126fc160cc70b0a9 running}: state = "running", want "paused"
	I1120 20:55:06.912996  278240 cri.go:129] container: {ID:5325fb115be6f954972b126cb1c83e40b17960f3a320d12907ac86451b2f7e59 Status:running}
	I1120 20:55:06.913006  278240 cri.go:131] skipping 5325fb115be6f954972b126cb1c83e40b17960f3a320d12907ac86451b2f7e59 - not in ps
	I1120 20:55:06.913013  278240 cri.go:129] container: {ID:5991f4f03bd7b8ee06d8e5994261f6bbbb4946baed62b4cc417c5c72a2b67bb1 Status:running}
	I1120 20:55:06.913023  278240 cri.go:131] skipping 5991f4f03bd7b8ee06d8e5994261f6bbbb4946baed62b4cc417c5c72a2b67bb1 - not in ps
	I1120 20:55:06.913029  278240 cri.go:129] container: {ID:7e44b7ce6cb576ae9115f4911a14668aa89687a7c3e2ea1d5b2035a443b72196 Status:running}
	I1120 20:55:06.913036  278240 cri.go:131] skipping 7e44b7ce6cb576ae9115f4911a14668aa89687a7c3e2ea1d5b2035a443b72196 - not in ps
	I1120 20:55:06.913041  278240 cri.go:129] container: {ID:89ffc1b6476d847109828e6cd3c5db9ee0dbadcd2674eabcdbab71491b20f406 Status:running}
	I1120 20:55:06.913049  278240 cri.go:135] skipping {89ffc1b6476d847109828e6cd3c5db9ee0dbadcd2674eabcdbab71491b20f406 running}: state = "running", want "paused"
	I1120 20:55:06.913055  278240 cri.go:129] container: {ID:b29f9a9559f5f73f8ff7f7d1deafc123f8b9b493527b013f62aedb2e79ac8e43 Status:running}
	I1120 20:55:06.913062  278240 cri.go:135] skipping {b29f9a9559f5f73f8ff7f7d1deafc123f8b9b493527b013f62aedb2e79ac8e43 running}: state = "running", want "paused"
	I1120 20:55:06.913068  278240 cri.go:129] container: {ID:bfdb526fd94d494c784332950704c7c1902008b6b4e3a059cee3d9361c0f7f54 Status:running}
	I1120 20:55:06.913077  278240 cri.go:131] skipping bfdb526fd94d494c784332950704c7c1902008b6b4e3a059cee3d9361c0f7f54 - not in ps
	I1120 20:55:06.913086  278240 cri.go:129] container: {ID:fd8955c6a55abdf0fbec0325f092e2e88f77a629f1b1f1c9853794b138bf33e4 Status:running}
	I1120 20:55:06.913095  278240 cri.go:135] skipping {fd8955c6a55abdf0fbec0325f092e2e88f77a629f1b1f1c9853794b138bf33e4 running}: state = "running", want "paused"
	I1120 20:55:06.913145  278240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 20:55:06.921808  278240 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 20:55:06.921828  278240 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 20:55:06.921874  278240 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 20:55:06.929779  278240 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 20:55:06.931066  278240 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-439796" does not appear in /home/jenkins/minikube-integration/21923-3769/kubeconfig
	I1120 20:55:06.932018  278240 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-3769/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-439796" cluster setting kubeconfig missing "newest-cni-439796" context setting]
	I1120 20:55:06.933358  278240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/kubeconfig: {Name:mk92246a312eabd67c28c34f15135551d85e2541 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:55:06.935078  278240 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 20:55:06.943760  278240 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1120 20:55:06.943796  278240 kubeadm.go:602] duration metric: took 21.961753ms to restartPrimaryControlPlane
	I1120 20:55:06.943806  278240 kubeadm.go:403] duration metric: took 107.500823ms to StartCluster
	I1120 20:55:06.943825  278240 settings.go:142] acquiring lock: {Name:mkd78c1a946fc1da0bff0b049ee93f62b6457c3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:55:06.943892  278240 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-3769/kubeconfig
	I1120 20:55:06.946094  278240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3769/kubeconfig: {Name:mk92246a312eabd67c28c34f15135551d85e2541 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:55:06.946312  278240 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1120 20:55:06.946438  278240 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 20:55:06.946538  278240 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-439796"
	I1120 20:55:06.946556  278240 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-439796"
	W1120 20:55:06.946565  278240 addons.go:248] addon storage-provisioner should already be in state true
	I1120 20:55:06.946593  278240 host.go:66] Checking if "newest-cni-439796" exists ...
	I1120 20:55:06.946618  278240 config.go:182] Loaded profile config "newest-cni-439796": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:55:06.946667  278240 addons.go:70] Setting default-storageclass=true in profile "newest-cni-439796"
	I1120 20:55:06.946678  278240 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-439796"
	I1120 20:55:06.946897  278240 cli_runner.go:164] Run: docker container inspect newest-cni-439796 --format={{.State.Status}}
	I1120 20:55:06.946972  278240 addons.go:70] Setting metrics-server=true in profile "newest-cni-439796"
	I1120 20:55:06.946995  278240 addons.go:239] Setting addon metrics-server=true in "newest-cni-439796"
	W1120 20:55:06.947004  278240 addons.go:248] addon metrics-server should already be in state true
	I1120 20:55:06.947044  278240 host.go:66] Checking if "newest-cni-439796" exists ...
	I1120 20:55:06.947068  278240 cli_runner.go:164] Run: docker container inspect newest-cni-439796 --format={{.State.Status}}
	I1120 20:55:06.947240  278240 addons.go:70] Setting dashboard=true in profile "newest-cni-439796"
	I1120 20:55:06.947255  278240 addons.go:239] Setting addon dashboard=true in "newest-cni-439796"
	W1120 20:55:06.947263  278240 addons.go:248] addon dashboard should already be in state true
	I1120 20:55:06.947284  278240 host.go:66] Checking if "newest-cni-439796" exists ...
	I1120 20:55:06.947519  278240 cli_runner.go:164] Run: docker container inspect newest-cni-439796 --format={{.State.Status}}
	I1120 20:55:06.947766  278240 cli_runner.go:164] Run: docker container inspect newest-cni-439796 --format={{.State.Status}}
	I1120 20:55:06.947975  278240 out.go:179] * Verifying Kubernetes components...
	I1120 20:55:06.949539  278240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:55:06.975930  278240 addons.go:239] Setting addon default-storageclass=true in "newest-cni-439796"
	W1120 20:55:06.976504  278240 addons.go:248] addon default-storageclass should already be in state true
	I1120 20:55:06.976588  278240 host.go:66] Checking if "newest-cni-439796" exists ...
	I1120 20:55:06.977154  278240 cli_runner.go:164] Run: docker container inspect newest-cni-439796 --format={{.State.Status}}
	I1120 20:55:06.979879  278240 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:55:06.981133  278240 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1120 20:55:06.981150  278240 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:55:06.982302  278240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 20:55:06.982362  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:06.984348  278240 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1120 20:55:06.985351  278240 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1120 20:55:06.985403  278240 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1120 20:55:06.985439  278240 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1120 20:55:06.985486  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:06.986410  278240 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1120 20:55:06.986430  278240 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1120 20:55:06.986485  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:07.024539  278240 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 20:55:07.024564  278240 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 20:55:07.024628  278240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-439796
	I1120 20:55:07.038061  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:07.038122  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:07.045334  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:07.077937  278240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/newest-cni-439796/id_rsa Username:docker}
	I1120 20:55:07.159586  278240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:55:07.173182  278240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:55:07.180976  278240 api_server.go:52] waiting for apiserver process to appear ...
	I1120 20:55:07.181133  278240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:55:07.197659  278240 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1120 20:55:07.197684  278240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1120 20:55:07.214817  278240 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1120 20:55:07.214847  278240 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1120 20:55:07.219698  278240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 20:55:07.236698  278240 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1120 20:55:07.236739  278240 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1120 20:55:07.238962  278240 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1120 20:55:07.238988  278240 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1120 20:55:07.261316  278240 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1120 20:55:07.261398  278240 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1120 20:55:07.263680  278240 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 20:55:07.263699  278240 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1120 20:55:07.283693  278240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 20:55:07.284024  278240 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1120 20:55:07.284046  278240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1120 20:55:07.310201  278240 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1120 20:55:07.310240  278240 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1120 20:55:07.327656  278240 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1120 20:55:07.327679  278240 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1120 20:55:07.345555  278240 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1120 20:55:07.345581  278240 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1120 20:55:07.361901  278240 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1120 20:55:07.361931  278240 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1120 20:55:07.377645  278240 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 20:55:07.377675  278240 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1120 20:55:07.393648  278240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 20:55:09.287360  278240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.11414184s)
	I1120 20:55:09.287460  278240 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.106306477s)
	I1120 20:55:09.287499  278240 api_server.go:72] duration metric: took 2.341156335s to wait for apiserver process to appear ...
	I1120 20:55:09.287502  278240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.067775465s)
	I1120 20:55:09.287510  278240 api_server.go:88] waiting for apiserver healthz status ...
	I1120 20:55:09.287531  278240 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1120 20:55:09.287598  278240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.003872297s)
	I1120 20:55:09.287629  278240 addons.go:480] Verifying addon metrics-server=true in "newest-cni-439796"
	I1120 20:55:09.287713  278240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.894023771s)
	I1120 20:55:09.289364  278240 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-439796 addons enable metrics-server
	
	I1120 20:55:09.295012  278240 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 20:55:09.295051  278240 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 20:55:09.301006  278240 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I1120 20:55:09.062184  231112 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.060457942s)
	W1120 20:55:09.062238  231112 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1120 20:55:09.062249  231112 logs.go:123] Gathering logs for kube-apiserver [cb7769bf1648b74c1a546d0f3e756ef05dafac966c3de96e017408ab4cd99787] ...
	I1120 20:55:09.062265  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7769bf1648b74c1a546d0f3e756ef05dafac966c3de96e017408ab4cd99787"
	I1120 20:55:09.113136  231112 logs.go:123] Gathering logs for kube-apiserver [db00732a90f8c6d70acc941ae3bbac6147f57f0981a2c6e08b460374f8ff03d2] ...
	I1120 20:55:09.113176  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 db00732a90f8c6d70acc941ae3bbac6147f57f0981a2c6e08b460374f8ff03d2"
	I1120 20:55:09.171511  231112 logs.go:123] Gathering logs for kube-scheduler [0da6494bbfe7b9edac15def12ca9b9380f57b88a75e7babb5e74e1f6a49fff25] ...
	I1120 20:55:09.171552  231112 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0da6494bbfe7b9edac15def12ca9b9380f57b88a75e7babb5e74e1f6a49fff25"
	I1120 20:55:09.302312  278240 addons.go:515] duration metric: took 2.35588774s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I1120 20:55:09.788644  278240 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1120 20:55:09.793157  278240 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1120 20:55:09.794197  278240 api_server.go:141] control plane version: v1.34.1
	I1120 20:55:09.794224  278240 api_server.go:131] duration metric: took 506.70699ms to wait for apiserver health ...
	I1120 20:55:09.794233  278240 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 20:55:09.797847  278240 system_pods.go:59] 9 kube-system pods found
	I1120 20:55:09.797876  278240 system_pods.go:61] "coredns-66bc5c9577-tq44x" [ff948205-df7c-4ef9-9f5c-477b2f9bd6c8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 20:55:09.797883  278240 system_pods.go:61] "etcd-newest-cni-439796" [738bd57a-0cd4-4a8d-93f1-abf8fc4d015c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 20:55:09.797890  278240 system_pods.go:61] "kindnet-9l2rj" [34d86602-3732-4a7c-9dec-c38291019e51] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 20:55:09.797899  278240 system_pods.go:61] "kube-apiserver-newest-cni-439796" [41ca83b9-690c-49f2-b682-7c1260206c13] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 20:55:09.797904  278240 system_pods.go:61] "kube-controller-manager-newest-cni-439796" [2894784f-75ab-43dd-a891-4fd2db248b92] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 20:55:09.797910  278240 system_pods.go:61] "kube-proxy-7vwkv" [b571700e-d4d8-4498-a70a-51e436c9b877] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 20:55:09.797922  278240 system_pods.go:61] "kube-scheduler-newest-cni-439796" [effcf967-37db-4b3d-b1cb-6faa7b0bc180] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 20:55:09.797929  278240 system_pods.go:61] "metrics-server-746fcd58dc-h7b8q" [4129553e-525e-4b8e-91d1-a0b08db35488] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 20:55:09.797934  278240 system_pods.go:61] "storage-provisioner" [9e5aa1c1-be78-4b77-a920-b640b885d141] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 20:55:09.797945  278240 system_pods.go:74] duration metric: took 3.705397ms to wait for pod list to return data ...
	I1120 20:55:09.797951  278240 default_sa.go:34] waiting for default service account to be created ...
	I1120 20:55:09.800263  278240 default_sa.go:45] found service account: "default"
	I1120 20:55:09.800281  278240 default_sa.go:55] duration metric: took 2.321435ms for default service account to be created ...
	I1120 20:55:09.800291  278240 kubeadm.go:587] duration metric: took 2.853949947s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 20:55:09.800306  278240 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:55:09.802485  278240 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:55:09.802507  278240 node_conditions.go:123] node cpu capacity is 8
	I1120 20:55:09.802517  278240 node_conditions.go:105] duration metric: took 2.206731ms to run NodePressure ...
	I1120 20:55:09.802527  278240 start.go:242] waiting for startup goroutines ...
	I1120 20:55:09.802534  278240 start.go:247] waiting for cluster config update ...
	I1120 20:55:09.802544  278240 start.go:256] writing updated cluster config ...
	I1120 20:55:09.802796  278240 ssh_runner.go:195] Run: rm -f paused
	I1120 20:55:09.862839  278240 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 20:55:09.864650  278240 out.go:179] * Done! kubectl is now configured to use "newest-cni-439796" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	b49146ca82c2b       56cc512116c8f       9 seconds ago       Running             busybox                   0                   ff2cac73caa52       busybox                                                default
	2d3beb216cf8e       52546a367cc9e       13 seconds ago      Running             coredns                   0                   3fb02eb4fc65c       coredns-66bc5c9577-m5kfb                               kube-system
	9600e46673cac       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   748471ed2feae       storage-provisioner                                    kube-system
	19aee05378ba4       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   a20c2bc133c94       kindnet-sg6pg                                          kube-system
	29cd300441f9d       fc25172553d79       25 seconds ago      Running             kube-proxy                0                   670d86617d069       kube-proxy-9dwtf                                       kube-system
	10e893bdc3051       7dd6aaa1717ab       36 seconds ago      Running             kube-scheduler            0                   8bde2bb90b7ca       kube-scheduler-default-k8s-diff-port-053182            kube-system
	860d0852403df       c80c8dbafe7dd       36 seconds ago      Running             kube-controller-manager   0                   3f322568bd335       kube-controller-manager-default-k8s-diff-port-053182   kube-system
	3db382118c305       c3994bc696102       36 seconds ago      Running             kube-apiserver            0                   f7640c8a1781d       kube-apiserver-default-k8s-diff-port-053182            kube-system
	41b22f3183abd       5f1f5298c888d       36 seconds ago      Running             etcd                      0                   17d1476143614       etcd-default-k8s-diff-port-053182                      kube-system
	
	
	==> containerd <==
	Nov 20 20:55:01 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:01.706098159Z" level=info msg="CreateContainer within sandbox \"748471ed2feae2dcc3604e24f1cd22d43af7cb4bb486af0bbd1c8a1920d159f3\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"9600e46673cac711c07280ca4ae551bebb53a0a2f42ed748b9683017d7c9c837\""
	Nov 20 20:55:01 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:01.706545780Z" level=info msg="StartContainer for \"9600e46673cac711c07280ca4ae551bebb53a0a2f42ed748b9683017d7c9c837\""
	Nov 20 20:55:01 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:01.707294923Z" level=info msg="connecting to shim 9600e46673cac711c07280ca4ae551bebb53a0a2f42ed748b9683017d7c9c837" address="unix:///run/containerd/s/cb06d401419f18cbc1fbf98ed0e8757ec1ebba25bb8006d2862165e7f6c2d548" protocol=ttrpc version=3
	Nov 20 20:55:01 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:01.709383578Z" level=info msg="Container 2d3beb216cf8e2114c2cece1569d4889d26206d0960bfc6d8a93565ba4ca5896: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 20:55:01 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:01.715180328Z" level=info msg="CreateContainer within sandbox \"3fb02eb4fc65ccaf37f39098545f350d6f6300b98ddf77db7df205de4d248e95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2d3beb216cf8e2114c2cece1569d4889d26206d0960bfc6d8a93565ba4ca5896\""
	Nov 20 20:55:01 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:01.715979482Z" level=info msg="StartContainer for \"2d3beb216cf8e2114c2cece1569d4889d26206d0960bfc6d8a93565ba4ca5896\""
	Nov 20 20:55:01 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:01.716921615Z" level=info msg="connecting to shim 2d3beb216cf8e2114c2cece1569d4889d26206d0960bfc6d8a93565ba4ca5896" address="unix:///run/containerd/s/7b7c0c89af559364c2d129f7e92b91267db84fc4fac828ab8cfaf61458db82be" protocol=ttrpc version=3
	Nov 20 20:55:01 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:01.765359161Z" level=info msg="StartContainer for \"9600e46673cac711c07280ca4ae551bebb53a0a2f42ed748b9683017d7c9c837\" returns successfully"
	Nov 20 20:55:01 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:01.773693182Z" level=info msg="StartContainer for \"2d3beb216cf8e2114c2cece1569d4889d26206d0960bfc6d8a93565ba4ca5896\" returns successfully"
	Nov 20 20:55:04 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:04.548579017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:d7fdd532-26fc-4206-b10a-0b4b374325ee,Namespace:default,Attempt:0,}"
	Nov 20 20:55:04 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:04.590215696Z" level=info msg="connecting to shim ff2cac73caa52e175255f8e9ff2ffd72a8bcff61649027dbcd64be29d0c26e22" address="unix:///run/containerd/s/90a4dcb9aee5c2553c6bcf4ab25b1c345ad9d47ddbabae5564e3927f515cf0e9" namespace=k8s.io protocol=ttrpc version=3
	Nov 20 20:55:04 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:04.662814808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:d7fdd532-26fc-4206-b10a-0b4b374325ee,Namespace:default,Attempt:0,} returns sandbox id \"ff2cac73caa52e175255f8e9ff2ffd72a8bcff61649027dbcd64be29d0c26e22\""
	Nov 20 20:55:04 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:04.665041461Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.169105596Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.169864551Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396641"
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.171245385Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.173287130Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.173932896Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 1.508842033s"
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.173979477Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.178964277Z" level=info msg="CreateContainer within sandbox \"ff2cac73caa52e175255f8e9ff2ffd72a8bcff61649027dbcd64be29d0c26e22\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.186511447Z" level=info msg="Container b49146ca82c2bce453dbeed03d3ca1e0bb01b339a8cdc1dd1daf21c443c22739: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.192936543Z" level=info msg="CreateContainer within sandbox \"ff2cac73caa52e175255f8e9ff2ffd72a8bcff61649027dbcd64be29d0c26e22\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"b49146ca82c2bce453dbeed03d3ca1e0bb01b339a8cdc1dd1daf21c443c22739\""
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.193667118Z" level=info msg="StartContainer for \"b49146ca82c2bce453dbeed03d3ca1e0bb01b339a8cdc1dd1daf21c443c22739\""
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.194787414Z" level=info msg="connecting to shim b49146ca82c2bce453dbeed03d3ca1e0bb01b339a8cdc1dd1daf21c443c22739" address="unix:///run/containerd/s/90a4dcb9aee5c2553c6bcf4ab25b1c345ad9d47ddbabae5564e3927f515cf0e9" protocol=ttrpc version=3
	Nov 20 20:55:06 default-k8s-diff-port-053182 containerd[660]: time="2025-11-20T20:55:06.252843031Z" level=info msg="StartContainer for \"b49146ca82c2bce453dbeed03d3ca1e0bb01b339a8cdc1dd1daf21c443c22739\" returns successfully"
	
	
	==> coredns [2d3beb216cf8e2114c2cece1569d4889d26206d0960bfc6d8a93565ba4ca5896] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45872 - 15983 "HINFO IN 8088891501838170795.6232109609411617053. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032677784s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-053182
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-053182
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=default-k8s-diff-port-053182
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_54_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:54:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-053182
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:55:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:55:14 +0000   Thu, 20 Nov 2025 20:54:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:55:14 +0000   Thu, 20 Nov 2025 20:54:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:55:14 +0000   Thu, 20 Nov 2025 20:54:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:55:14 +0000   Thu, 20 Nov 2025 20:55:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-053182
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                df9984e9-1f83-4a5b-8be1-3539de380cc3
	  Boot ID:                    7bcace10-faf8-4276-88b3-44b8d57bd915
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-m5kfb                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-default-k8s-diff-port-053182                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-sg6pg                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-053182             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-053182    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-9dwtf                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-053182             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node default-k8s-diff-port-053182 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node default-k8s-diff-port-053182 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x7 over 37s)  kubelet          Node default-k8s-diff-port-053182 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  37s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  31s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node default-k8s-diff-port-053182 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node default-k8s-diff-port-053182 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node default-k8s-diff-port-053182 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node default-k8s-diff-port-053182 event: Registered Node default-k8s-diff-port-053182 in Controller
	  Normal  NodeReady                14s                kubelet          Node default-k8s-diff-port-053182 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov20 20:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001791] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.083011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400115] i8042: Warning: Keylock active
	[  +0.013837] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499559] block sda: the capability attribute has been deprecated.
	[  +0.087912] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024934] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.433429] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [41b22f3183abd53ebe2e09fc6299d3b3770e1bb5eb3f29a5656b87f782fa33fb] <==
	{"level":"warn","ts":"2025-11-20T20:54:40.780590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.793814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.798957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.806673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.816730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.827291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.837351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.844431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.852510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.861535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.869675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.877831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.886161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.894989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.906287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.914579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.921962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.929679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.937454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.945310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.951929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.967695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.976022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:40.983730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:54:41.043918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55680","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:55:15 up 37 min,  0 user,  load average: 3.45, 2.97, 2.06
	Linux default-k8s-diff-port-053182 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [19aee05378ba4959ce80d9bc6b453ea4b001450c586e7a8637f4b6c82aa70dc1] <==
	I1120 20:54:50.885930       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 20:54:50.886185       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1120 20:54:50.886312       1 main.go:148] setting mtu 1500 for CNI 
	I1120 20:54:50.971117       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 20:54:50.971160       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T20:54:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 20:54:51.183002       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 20:54:51.183070       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 20:54:51.183083       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 20:54:51.183236       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 20:54:51.572047       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 20:54:51.572074       1 metrics.go:72] Registering metrics
	I1120 20:54:51.572124       1 controller.go:711] "Syncing nftables rules"
	I1120 20:55:01.186418       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 20:55:01.186494       1 main.go:301] handling current node
	I1120 20:55:11.185735       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 20:55:11.185774       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3db382118c3056b2d8b4ed257f6012e141be4e2f391642de800c6c8b4308cdfa] <==
	I1120 20:54:41.724545       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 20:54:41.725146       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1120 20:54:41.727007       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:54:41.731744       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:54:41.733202       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 20:54:41.734687       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 20:54:41.753907       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 20:54:42.627832       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 20:54:42.637662       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 20:54:42.637688       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 20:54:43.109690       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 20:54:43.148330       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 20:54:43.224826       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 20:54:43.231399       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1120 20:54:43.232538       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 20:54:43.237023       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 20:54:43.686839       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 20:54:44.261473       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 20:54:44.270328       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 20:54:44.277866       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 20:54:49.541697       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:54:49.546335       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:54:49.740174       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 20:54:49.788764       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1120 20:55:12.387241       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:44870: use of closed network connection
	
	
	==> kube-controller-manager [860d0852403dfa81b9879b3a10cfdbf9452c81cf9849e45c8f5206f57d37b4a8] <==
	I1120 20:54:48.655074       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 20:54:48.661967       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 20:54:48.669213       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 20:54:48.676518       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 20:54:48.685284       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 20:54:48.685331       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 20:54:48.685331       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 20:54:48.685377       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1120 20:54:48.685399       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:54:48.685408       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 20:54:48.685416       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 20:54:48.685537       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1120 20:54:48.686835       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 20:54:48.686885       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 20:54:48.686933       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 20:54:48.686955       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 20:54:48.686994       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 20:54:48.687015       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1120 20:54:48.687418       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 20:54:48.687004       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 20:54:48.688056       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 20:54:48.689571       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 20:54:48.689886       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:54:48.706282       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:55:03.638274       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [29cd300441f9d92488c4ced8b1bb62f46fcaa21732cc0c1ce556887d74710dbf] <==
	I1120 20:54:50.429459       1 server_linux.go:53] "Using iptables proxy"
	I1120 20:54:50.503721       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 20:54:50.604430       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 20:54:50.604472       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1120 20:54:50.604591       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 20:54:50.632735       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 20:54:50.632794       1 server_linux.go:132] "Using iptables Proxier"
	I1120 20:54:50.638627       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 20:54:50.638992       1 server.go:527] "Version info" version="v1.34.1"
	I1120 20:54:50.639026       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:54:50.640635       1 config.go:200] "Starting service config controller"
	I1120 20:54:50.640662       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 20:54:50.640698       1 config.go:106] "Starting endpoint slice config controller"
	I1120 20:54:50.640694       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 20:54:50.640715       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 20:54:50.640704       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 20:54:50.640749       1 config.go:309] "Starting node config controller"
	I1120 20:54:50.640759       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 20:54:50.640766       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 20:54:50.741574       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 20:54:50.741642       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 20:54:50.741656       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [10e893bdc3051a23e048f7f2812d625e1c495d7a3a82c593dd4edd7fbd1f5824] <==
	E1120 20:54:41.699382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:54:41.698950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 20:54:41.698993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 20:54:41.699023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 20:54:41.699519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:54:41.699768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:54:41.699853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 20:54:41.699913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 20:54:41.700201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:54:41.698873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 20:54:41.699629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 20:54:41.701036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:54:42.513986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:54:42.558339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 20:54:42.587770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 20:54:42.639008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:54:42.693875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:54:42.814661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 20:54:42.840916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 20:54:42.848009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 20:54:42.905725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 20:54:42.942934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:54:42.948432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 20:54:42.952645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1120 20:54:43.292177       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 20:54:45 default-k8s-diff-port-053182 kubelet[1466]: E1120 20:54:45.146522    1466 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-053182\" already exists" pod="kube-system/etcd-default-k8s-diff-port-053182"
	Nov 20 20:54:45 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:45.182031    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-053182" podStartSLOduration=1.182009204 podStartE2EDuration="1.182009204s" podCreationTimestamp="2025-11-20 20:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:45.18199247 +0000 UTC m=+1.159898856" watchObservedRunningTime="2025-11-20 20:54:45.182009204 +0000 UTC m=+1.159915591"
	Nov 20 20:54:45 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:45.214618    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-053182" podStartSLOduration=1.214590759 podStartE2EDuration="1.214590759s" podCreationTimestamp="2025-11-20 20:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:45.195734202 +0000 UTC m=+1.173640586" watchObservedRunningTime="2025-11-20 20:54:45.214590759 +0000 UTC m=+1.192497149"
	Nov 20 20:54:45 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:45.224689    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-053182" podStartSLOduration=2.224668767 podStartE2EDuration="2.224668767s" podCreationTimestamp="2025-11-20 20:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:45.214807367 +0000 UTC m=+1.192713750" watchObservedRunningTime="2025-11-20 20:54:45.224668767 +0000 UTC m=+1.202575157"
	Nov 20 20:54:45 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:45.239988    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-053182" podStartSLOduration=1.239942503 podStartE2EDuration="1.239942503s" podCreationTimestamp="2025-11-20 20:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:45.224659978 +0000 UTC m=+1.202566364" watchObservedRunningTime="2025-11-20 20:54:45.239942503 +0000 UTC m=+1.217848887"
	Nov 20 20:54:48 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:48.661762    1466 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 20:54:48 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:48.662511    1466 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 20:54:49 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:49.840140    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f060cb7-fe2e-40da-b620-0ae4ab1b46ca-lib-modules\") pod \"kindnet-sg6pg\" (UID: \"1f060cb7-fe2e-40da-b620-0ae4ab1b46ca\") " pod="kube-system/kindnet-sg6pg"
	Nov 20 20:54:49 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:49.841352    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3-xtables-lock\") pod \"kube-proxy-9dwtf\" (UID: \"f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3\") " pod="kube-system/kube-proxy-9dwtf"
	Nov 20 20:54:49 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:49.841422    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz99l\" (UniqueName: \"kubernetes.io/projected/f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3-kube-api-access-zz99l\") pod \"kube-proxy-9dwtf\" (UID: \"f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3\") " pod="kube-system/kube-proxy-9dwtf"
	Nov 20 20:54:49 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:49.843069    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3-lib-modules\") pod \"kube-proxy-9dwtf\" (UID: \"f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3\") " pod="kube-system/kube-proxy-9dwtf"
	Nov 20 20:54:49 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:49.843111    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1f060cb7-fe2e-40da-b620-0ae4ab1b46ca-cni-cfg\") pod \"kindnet-sg6pg\" (UID: \"1f060cb7-fe2e-40da-b620-0ae4ab1b46ca\") " pod="kube-system/kindnet-sg6pg"
	Nov 20 20:54:49 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:49.843137    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3-kube-proxy\") pod \"kube-proxy-9dwtf\" (UID: \"f55e9f10-1b05-4a4c-8db2-9f49bbe4fbb3\") " pod="kube-system/kube-proxy-9dwtf"
	Nov 20 20:54:49 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:49.843158    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f060cb7-fe2e-40da-b620-0ae4ab1b46ca-xtables-lock\") pod \"kindnet-sg6pg\" (UID: \"1f060cb7-fe2e-40da-b620-0ae4ab1b46ca\") " pod="kube-system/kindnet-sg6pg"
	Nov 20 20:54:49 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:49.843208    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m9kn\" (UniqueName: \"kubernetes.io/projected/1f060cb7-fe2e-40da-b620-0ae4ab1b46ca-kube-api-access-8m9kn\") pod \"kindnet-sg6pg\" (UID: \"1f060cb7-fe2e-40da-b620-0ae4ab1b46ca\") " pod="kube-system/kindnet-sg6pg"
	Nov 20 20:54:51 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:51.172641    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9dwtf" podStartSLOduration=2.17261842 podStartE2EDuration="2.17261842s" podCreationTimestamp="2025-11-20 20:54:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:51.161750952 +0000 UTC m=+7.139657341" watchObservedRunningTime="2025-11-20 20:54:51.17261842 +0000 UTC m=+7.150524809"
	Nov 20 20:54:51 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:54:51.184217    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-sg6pg" podStartSLOduration=2.184192715 podStartE2EDuration="2.184192715s" podCreationTimestamp="2025-11-20 20:54:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:54:51.183993959 +0000 UTC m=+7.161900345" watchObservedRunningTime="2025-11-20 20:54:51.184192715 +0000 UTC m=+7.162099103"
	Nov 20 20:55:01 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:55:01.263549    1466 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 20 20:55:01 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:55:01.326842    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/47956acc-9579-4eb7-9d9f-a6e82239fcd8-tmp\") pod \"storage-provisioner\" (UID: \"47956acc-9579-4eb7-9d9f-a6e82239fcd8\") " pod="kube-system/storage-provisioner"
	Nov 20 20:55:01 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:55:01.326903    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g59m\" (UniqueName: \"kubernetes.io/projected/47956acc-9579-4eb7-9d9f-a6e82239fcd8-kube-api-access-9g59m\") pod \"storage-provisioner\" (UID: \"47956acc-9579-4eb7-9d9f-a6e82239fcd8\") " pod="kube-system/storage-provisioner"
	Nov 20 20:55:01 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:55:01.326946    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7af76736-ef8a-434f-ad0c-b52641f9f02d-config-volume\") pod \"coredns-66bc5c9577-m5kfb\" (UID: \"7af76736-ef8a-434f-ad0c-b52641f9f02d\") " pod="kube-system/coredns-66bc5c9577-m5kfb"
	Nov 20 20:55:01 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:55:01.326968    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvq5q\" (UniqueName: \"kubernetes.io/projected/7af76736-ef8a-434f-ad0c-b52641f9f02d-kube-api-access-zvq5q\") pod \"coredns-66bc5c9577-m5kfb\" (UID: \"7af76736-ef8a-434f-ad0c-b52641f9f02d\") " pod="kube-system/coredns-66bc5c9577-m5kfb"
	Nov 20 20:55:02 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:55:02.183864    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.183844095 podStartE2EDuration="12.183844095s" podCreationTimestamp="2025-11-20 20:54:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:55:02.183557992 +0000 UTC m=+18.161464380" watchObservedRunningTime="2025-11-20 20:55:02.183844095 +0000 UTC m=+18.161750481"
	Nov 20 20:55:02 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:55:02.193120    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-m5kfb" podStartSLOduration=13.193100271 podStartE2EDuration="13.193100271s" podCreationTimestamp="2025-11-20 20:54:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:55:02.192763829 +0000 UTC m=+18.170670214" watchObservedRunningTime="2025-11-20 20:55:02.193100271 +0000 UTC m=+18.171006656"
	Nov 20 20:55:04 default-k8s-diff-port-053182 kubelet[1466]: I1120 20:55:04.343774    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z55jv\" (UniqueName: \"kubernetes.io/projected/d7fdd532-26fc-4206-b10a-0b4b374325ee-kube-api-access-z55jv\") pod \"busybox\" (UID: \"d7fdd532-26fc-4206-b10a-0b4b374325ee\") " pod="default/busybox"
	
	
	==> storage-provisioner [9600e46673cac711c07280ca4ae551bebb53a0a2f42ed748b9683017d7c9c837] <==
	I1120 20:55:01.776262       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 20:55:01.784951       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 20:55:01.785000       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 20:55:01.787337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:01.792801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 20:55:01.793124       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 20:55:01.793181       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba78d1b6-3e16-4f55-a5b7-7575bdeabcc4", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-053182_ba54a89a-9359-4ed7-b2a4-9993d1bb52cf became leader
	I1120 20:55:01.793303       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-053182_ba54a89a-9359-4ed7-b2a4-9993d1bb52cf!
	W1120 20:55:01.795391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:01.800399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 20:55:01.893589       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-053182_ba54a89a-9359-4ed7-b2a4-9993d1bb52cf!
	W1120 20:55:03.803513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:03.809123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:05.812114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:05.816531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:07.823227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:07.829937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:09.834109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:09.839142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:11.842953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:11.847439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:13.851139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:13.855335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-053182 -n default-k8s-diff-port-053182
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-053182 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.23s)
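For reference, a minimal Go sketch (not part of the minikube test suite) that reruns the same non-Running-pod check issued at helpers_test.go:269 above. It assumes kubectl is on PATH and that the "default-k8s-diff-port-053182" context from this run still exists; both names are taken from the log and will differ for other runs.

	// rerun the post-mortem query: list pods in any namespace whose phase is not Running
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// same flags as the command logged by the test helper
		out, err := exec.Command(
			"kubectl", "--context", "default-k8s-diff-port-053182",
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running",
		).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s\n", err, out)
			return
		}
		names := strings.Fields(string(out))
		if len(names) == 0 {
			fmt.Println("no non-Running pods found")
			return
		}
		fmt.Printf("non-Running pods: %v\n", names)
	}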


Test pass (303/333)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.37
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 3.92
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.4
21 TestBinaryMirror 0.82
22 TestOffline 57.77
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 100.55
29 TestAddons/serial/Volcano 38.12
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 8.44
35 TestAddons/parallel/Registry 14.56
36 TestAddons/parallel/RegistryCreds 0.66
37 TestAddons/parallel/Ingress 18.35
38 TestAddons/parallel/InspektorGadget 10.62
39 TestAddons/parallel/MetricsServer 5.64
41 TestAddons/parallel/CSI 47.91
42 TestAddons/parallel/Headlamp 15.45
43 TestAddons/parallel/CloudSpanner 5.52
44 TestAddons/parallel/LocalPath 50.63
45 TestAddons/parallel/NvidiaDevicePlugin 5.5
46 TestAddons/parallel/Yakd 10.67
47 TestAddons/parallel/AmdGpuDevicePlugin 5.47
48 TestAddons/StoppedEnableDisable 12.61
49 TestCertOptions 23.83
50 TestCertExpiration 212.61
52 TestForceSystemdFlag 29.47
53 TestForceSystemdEnv 30.54
54 TestDockerEnvContainerd 35.25
58 TestErrorSpam/setup 19.01
59 TestErrorSpam/start 0.65
60 TestErrorSpam/status 0.94
61 TestErrorSpam/pause 1.43
62 TestErrorSpam/unpause 1.48
63 TestErrorSpam/stop 1.48
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 41.91
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.04
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.4
75 TestFunctional/serial/CacheCmd/cache/add_local 0.83
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.48
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 47.41
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.19
86 TestFunctional/serial/LogsFileCmd 1.2
87 TestFunctional/serial/InvalidService 4.4
89 TestFunctional/parallel/ConfigCmd 0.43
90 TestFunctional/parallel/DashboardCmd 9.23
91 TestFunctional/parallel/DryRun 0.45
92 TestFunctional/parallel/InternationalLanguage 0.2
93 TestFunctional/parallel/StatusCmd 1.08
97 TestFunctional/parallel/ServiceCmdConnect 8.68
98 TestFunctional/parallel/AddonsCmd 0.22
99 TestFunctional/parallel/PersistentVolumeClaim 23.71
101 TestFunctional/parallel/SSHCmd 0.57
102 TestFunctional/parallel/CpCmd 1.8
103 TestFunctional/parallel/MySQL 20.28
104 TestFunctional/parallel/FileSync 0.3
105 TestFunctional/parallel/CertSync 1.82
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
113 TestFunctional/parallel/License 0.29
114 TestFunctional/parallel/ServiceCmd/DeployApp 8.16
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/Version/components 0.45
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
121 TestFunctional/parallel/ImageCommands/ImageBuild 3
122 TestFunctional/parallel/ImageCommands/Setup 0.48
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.17
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.23
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.36
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
130 TestFunctional/parallel/ProfileCmd/profile_list 0.44
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
134 TestFunctional/parallel/MountCmd/any-port 8.9
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.71
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
137 TestFunctional/parallel/ServiceCmd/List 0.96
138 TestFunctional/parallel/ServiceCmd/JSONOutput 1.1
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
140 TestFunctional/parallel/ServiceCmd/Format 0.34
141 TestFunctional/parallel/ServiceCmd/URL 0.36
143 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
144 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.3
147 TestFunctional/parallel/MountCmd/specific-port 1.87
148 TestFunctional/parallel/MountCmd/VerifyCleanup 2.02
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 109.47
163 TestMultiControlPlane/serial/DeployApp 4.92
164 TestMultiControlPlane/serial/PingHostFromPods 1.12
165 TestMultiControlPlane/serial/AddWorkerNode 24.57
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
168 TestMultiControlPlane/serial/CopyFile 17.05
169 TestMultiControlPlane/serial/StopSecondaryNode 12.72
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.89
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.95
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 94.9
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9.31
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
176 TestMultiControlPlane/serial/StopCluster 36.04
177 TestMultiControlPlane/serial/RestartCluster 56.2
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
179 TestMultiControlPlane/serial/AddSecondaryNode 71.36
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
185 TestJSONOutput/start/Command 37.96
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.73
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.6
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.86
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 28.24
211 TestKicCustomNetwork/use_default_bridge_network 23.19
212 TestKicExistingNetwork 23.71
213 TestKicCustomSubnet 24.35
214 TestKicStaticIP 28.18
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 47.37
219 TestMountStart/serial/StartWithMountFirst 7.64
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 4.32
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.66
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.25
226 TestMountStart/serial/RestartStopped 6.98
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 62.19
231 TestMultiNode/serial/DeployApp2Nodes 4.14
232 TestMultiNode/serial/PingHostFrom2Pods 0.76
233 TestMultiNode/serial/AddNode 25.24
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.67
236 TestMultiNode/serial/CopyFile 9.69
237 TestMultiNode/serial/StopNode 2.22
238 TestMultiNode/serial/StartAfterStop 6.82
239 TestMultiNode/serial/RestartKeepsNodes 70.86
240 TestMultiNode/serial/DeleteNode 5.19
241 TestMultiNode/serial/StopMultiNode 23.96
242 TestMultiNode/serial/RestartMultiNode 53.62
243 TestMultiNode/serial/ValidateNameConflict 22.17
248 TestPreload 110.28
250 TestScheduledStopUnix 96.12
253 TestInsufficientStorage 9.6
254 TestRunningBinaryUpgrade 100.56
256 TestKubernetesUpgrade 323.56
257 TestMissingContainerUpgrade 81.34
258 TestStoppedBinaryUpgrade/Setup 0.41
260 TestPause/serial/Start 47.14
261 TestStoppedBinaryUpgrade/Upgrade 116.32
262 TestPause/serial/SecondStartNoReconfiguration 7.53
263 TestPause/serial/Pause 2.19
264 TestPause/serial/VerifyStatus 0.38
265 TestPause/serial/Unpause 0.87
273 TestPause/serial/PauseAgain 0.8
274 TestPause/serial/DeletePaused 2.8
275 TestPause/serial/VerifyDeletedResources 0.59
277 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
278 TestNoKubernetes/serial/StartWithK8s 23.53
286 TestNetworkPlugins/group/false 6.06
290 TestNoKubernetes/serial/StartWithStopK8s 25
291 TestStoppedBinaryUpgrade/MinikubeLogs 1.33
292 TestNoKubernetes/serial/Start 9
293 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
294 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
295 TestNoKubernetes/serial/ProfileList 1.73
296 TestNoKubernetes/serial/Stop 1.33
297 TestNoKubernetes/serial/StartNoArgs 6.89
298 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
300 TestStartStop/group/old-k8s-version/serial/FirstStart 49.2
302 TestStartStop/group/no-preload/serial/FirstStart 47.95
304 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.91
305 TestStartStop/group/old-k8s-version/serial/Stop 12.02
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
308 TestStartStop/group/old-k8s-version/serial/SecondStart 50.27
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.8
310 TestStartStop/group/no-preload/serial/Stop 12.09
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
312 TestStartStop/group/no-preload/serial/SecondStart 44.12
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
317 TestStartStop/group/old-k8s-version/serial/Pause 2.75
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
320 TestStartStop/group/embed-certs/serial/FirstStart 42.94
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
322 TestStartStop/group/no-preload/serial/Pause 2.93
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 40.61
326 TestStartStop/group/newest-cni/serial/FirstStart 28.93
328 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.74
330 TestStartStop/group/newest-cni/serial/Stop 1.41
331 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
332 TestStartStop/group/newest-cni/serial/SecondStart 10.44
334 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
335 TestStartStop/group/embed-certs/serial/Stop 12.27
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
339 TestStartStop/group/newest-cni/serial/Pause 2.71
340 TestNetworkPlugins/group/auto/Start 43.2
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.87
342 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.26
343 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
344 TestStartStop/group/embed-certs/serial/SecondStart 51.32
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
346 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 47.53
347 TestNetworkPlugins/group/auto/KubeletFlags 0.33
348 TestNetworkPlugins/group/auto/NetCatPod 8.27
349 TestNetworkPlugins/group/auto/DNS 0.14
350 TestNetworkPlugins/group/auto/Localhost 0.12
351 TestNetworkPlugins/group/auto/HairPin 0.12
352 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
353 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
354 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
355 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
356 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
357 TestStartStop/group/embed-certs/serial/Pause 3.33
358 TestNetworkPlugins/group/kindnet/Start 43.18
359 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.41
360 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.18
361 TestNetworkPlugins/group/calico/Start 54.22
362 TestNetworkPlugins/group/custom-flannel/Start 48.62
363 TestNetworkPlugins/group/enable-default-cni/Start 73.38
364 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
365 TestNetworkPlugins/group/kindnet/KubeletFlags 0.45
366 TestNetworkPlugins/group/kindnet/NetCatPod 9.36
367 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
368 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.18
369 TestNetworkPlugins/group/kindnet/DNS 0.14
370 TestNetworkPlugins/group/calico/ControllerPod 6.01
371 TestNetworkPlugins/group/kindnet/Localhost 0.1
372 TestNetworkPlugins/group/kindnet/HairPin 0.1
373 TestNetworkPlugins/group/custom-flannel/DNS 0.13
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
376 TestNetworkPlugins/group/calico/KubeletFlags 0.3
377 TestNetworkPlugins/group/calico/NetCatPod 9.19
378 TestNetworkPlugins/group/calico/DNS 0.15
379 TestNetworkPlugins/group/calico/Localhost 0.13
380 TestNetworkPlugins/group/calico/HairPin 0.13
381 TestNetworkPlugins/group/flannel/Start 49.92
382 TestNetworkPlugins/group/bridge/Start 65.22
383 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
384 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.24
385 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
386 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
387 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
388 TestNetworkPlugins/group/flannel/ControllerPod 6.01
389 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
390 TestNetworkPlugins/group/flannel/NetCatPod 9.17
391 TestNetworkPlugins/group/flannel/DNS 0.13
392 TestNetworkPlugins/group/flannel/Localhost 0.11
393 TestNetworkPlugins/group/flannel/HairPin 0.11
394 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
395 TestNetworkPlugins/group/bridge/NetCatPod 8.19
396 TestNetworkPlugins/group/bridge/DNS 0.13
397 TestNetworkPlugins/group/bridge/Localhost 0.12
398 TestNetworkPlugins/group/bridge/HairPin 0.11
TestDownloadOnly/v1.28.0/json-events (4.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-566368 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-566368 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.37449438s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.37s)
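For anyone replaying this step outside the harness: the run above starts minikube with -o=json, so every progress event arrives as one JSON object per stdout line. The following is a minimal Go sketch (not the test's own code) that launches such a download-only run and decodes the event stream; the profile name and flag set are placeholders.

// Illustrative only: consume the JSON event stream written by a
// "minikube start -o=json --download-only" run. Profile name is a placeholder.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-o=json", "--download-only",
		"-p", "download-only-demo", "--driver=docker", "--container-runtime=containerd")
	out, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(out)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // individual events can be long
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON noise on stdout
		}
		fmt.Printf("event type=%v data=%v\n", ev["type"], ev["data"])
	}
	if err := cmd.Wait(); err != nil {
		log.Fatalf("minikube start failed: %v", err)
	}
}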

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1120 20:21:01.887460    7731 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1120 20:21:01.887529    7731 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
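The check above boils down to "is the preload tarball already in the local cache?". A rough Go sketch of that lookup, using the cache layout visible in the log line (MINIKUBE_HOME/cache/preloaded-tarball/...); the file-name pattern here is an assumption lifted from that path, not minikube's actual resolution logic:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the cached tarball path as seen in the log above.
func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	home := os.Getenv("MINIKUBE_HOME")
	if home == "" {
		home = filepath.Join(os.Getenv("HOME"), ".minikube")
	}
	p := preloadPath(home, "v1.28.0", "containerd")
	if _, err := os.Stat(p); err != nil {
		fmt.Println("preload missing:", p)
		return
	}
	fmt.Println("found local preload:", p)
}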

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-566368
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-566368: exit status 85 (71.962399ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-566368 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-566368 │ jenkins │ v1.37.0 │ 20 Nov 25 20:20 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:20:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:20:57.563339    7743 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:20:57.563624    7743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:20:57.563636    7743 out.go:374] Setting ErrFile to fd 2...
	I1120 20:20:57.563641    7743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:20:57.563836    7743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
	W1120 20:20:57.563965    7743 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21923-3769/.minikube/config/config.json: open /home/jenkins/minikube-integration/21923-3769/.minikube/config/config.json: no such file or directory
	I1120 20:20:57.564458    7743 out.go:368] Setting JSON to true
	I1120 20:20:57.565451    7743 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":210,"bootTime":1763669848,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:20:57.565552    7743 start.go:143] virtualization: kvm guest
	I1120 20:20:57.567815    7743 out.go:99] [download-only-566368] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:20:57.567969    7743 notify.go:221] Checking for updates...
	W1120 20:20:57.568008    7743 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21923-3769/.minikube/cache/preloaded-tarball: no such file or directory
	I1120 20:20:57.569397    7743 out.go:171] MINIKUBE_LOCATION=21923
	I1120 20:20:57.570686    7743 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:20:57.572001    7743 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21923-3769/kubeconfig
	I1120 20:20:57.573588    7743 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3769/.minikube
	I1120 20:20:57.574951    7743 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1120 20:20:57.577180    7743 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1120 20:20:57.577461    7743 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:20:57.603274    7743 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:20:57.603340    7743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:20:57.992169    7743 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-20 20:20:57.982954943 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:20:57.992275    7743 docker.go:319] overlay module found
	I1120 20:20:57.993883    7743 out.go:99] Using the docker driver based on user configuration
	I1120 20:20:57.993920    7743 start.go:309] selected driver: docker
	I1120 20:20:57.993928    7743 start.go:930] validating driver "docker" against <nil>
	I1120 20:20:57.994009    7743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:20:58.054486    7743 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-20 20:20:58.045070533 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:20:58.054641    7743 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 20:20:58.055122    7743 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1120 20:20:58.055273    7743 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1120 20:20:58.057057    7743 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-566368 host does not exist
	  To start a cluster, run: "minikube start -p download-only-566368"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
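As the note above says, "minikube logs" against a download-only profile is expected to fail (exit status 85 in this run) because the control-plane host was never created. A small Go sketch of that assertion, with a placeholder profile name:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "logs", "-p", "download-only-demo")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		log.Fatalf("expected a non-zero exit, got success:\n%s", out)
	case errors.As(err, &exitErr):
		// The report above records status 85 for this scenario.
		fmt.Printf("minikube logs exited with status %d\n", exitErr.ExitCode())
	default:
		log.Fatalf("could not run minikube: %v", err)
	}
}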

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-566368
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (3.92s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-336340 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-336340 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (3.919932252s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.92s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1120 20:21:06.242682    7731 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1120 20:21:06.242723    7731 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-336340
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-336340: exit status 85 (75.615387ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-566368 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-566368 │ jenkins │ v1.37.0 │ 20 Nov 25 20:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ delete  │ -p download-only-566368                                                                                                                                                               │ download-only-566368 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ start   │ -o=json --download-only -p download-only-336340 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-336340 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:21:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:21:02.372830    8099 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:21:02.373061    8099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:21:02.373069    8099 out.go:374] Setting ErrFile to fd 2...
	I1120 20:21:02.373073    8099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:21:02.373268    8099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
	I1120 20:21:02.373720    8099 out.go:368] Setting JSON to true
	I1120 20:21:02.374494    8099 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":214,"bootTime":1763669848,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:21:02.374583    8099 start.go:143] virtualization: kvm guest
	I1120 20:21:02.376606    8099 out.go:99] [download-only-336340] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:21:02.376784    8099 notify.go:221] Checking for updates...
	I1120 20:21:02.378138    8099 out.go:171] MINIKUBE_LOCATION=21923
	I1120 20:21:02.379727    8099 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:21:02.381113    8099 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21923-3769/kubeconfig
	I1120 20:21:02.382462    8099 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3769/.minikube
	I1120 20:21:02.383728    8099 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1120 20:21:02.386243    8099 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1120 20:21:02.386471    8099 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:21:02.408641    8099 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:21:02.408740    8099 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:21:02.466205    8099 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-20 20:21:02.456499509 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:21:02.466295    8099 docker.go:319] overlay module found
	I1120 20:21:02.467976    8099 out.go:99] Using the docker driver based on user configuration
	I1120 20:21:02.468012    8099 start.go:309] selected driver: docker
	I1120 20:21:02.468020    8099 start.go:930] validating driver "docker" against <nil>
	I1120 20:21:02.468102    8099 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:21:02.524515    8099 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-20 20:21:02.514298571 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:21:02.524672    8099 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 20:21:02.525123    8099 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1120 20:21:02.525255    8099 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1120 20:21:02.527087    8099 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-336340 host does not exist
	  To start a cluster, run: "minikube start -p download-only-336340"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-336340
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnlyKic (0.4s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-683208 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-683208" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-683208
--- PASS: TestDownloadOnlyKic (0.40s)

                                                
                                    
TestBinaryMirror (0.82s)

                                                
                                                
=== RUN   TestBinaryMirror
I1120 20:21:07.370290    7731 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-747132 --alsologtostderr --binary-mirror http://127.0.0.1:37389 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-747132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-747132
--- PASS: TestBinaryMirror (0.82s)

                                                
                                    
TestOffline (57.77s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-030110 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-030110 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (49.464256723s)
helpers_test.go:175: Cleaning up "offline-containerd-030110" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-030110
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-030110: (8.304818082s)
--- PASS: TestOffline (57.77s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-775382
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-775382: exit status 85 (63.426551ms)

                                                
                                                
-- stdout --
	* Profile "addons-775382" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-775382"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-775382
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-775382: exit status 85 (63.028324ms)

                                                
                                                
-- stdout --
	* Profile "addons-775382" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-775382"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (100.55s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-775382 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-775382 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m40.553131693s)
--- PASS: TestAddons/Setup (100.55s)

                                                
                                    
TestAddons/serial/Volcano (38.12s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:868: volcano-scheduler stabilized in 16.61759ms
addons_test.go:884: volcano-controller stabilized in 16.671924ms
addons_test.go:876: volcano-admission stabilized in 16.712755ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-fxhcs" [f0614dcd-9a68-4d7c-b69c-7598d8f4850e] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.051701262s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-mkzw4" [f4bd911b-68de-44ed-96e4-3168451e318d] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003063601s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-w8f7z" [85d92729-3166-4637-8394-6653aad3b048] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003363998s
addons_test.go:903: (dbg) Run:  kubectl --context addons-775382 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-775382 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-775382 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [1ac6bd13-7db1-494d-b12e-b1eed36be025] Pending
helpers_test.go:352: "test-job-nginx-0" [1ac6bd13-7db1-494d-b12e-b1eed36be025] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [1ac6bd13-7db1-494d-b12e-b1eed36be025] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.003873005s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775382 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-775382 addons disable volcano --alsologtostderr -v=1: (11.690552926s)
--- PASS: TestAddons/serial/Volcano (38.12s)
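The "waiting ... for pods matching <label>" lines that recur throughout this report (helpers_test.go:352) are simple label-selector polls. A hedged Go sketch of that pattern, polling kubectl until every matching pod reports phase Running or a timeout elapses; the context, namespace, and selector below are taken from this Volcano run and are only examples:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// allRunning reports whether every pod matching the selector is in phase Running.
func allRunning(kubeContext, namespace, selector string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pods",
		"-n", namespace, "-l", selector,
		"-o", "jsonpath={.items[*].status.phase}").Output()
	if err != nil {
		return false, err
	}
	phases := strings.Fields(string(out))
	if len(phases) == 0 {
		return false, nil // nothing scheduled yet
	}
	for _, p := range phases {
		if p != "Running" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	deadline := time.Now().Add(3 * time.Minute)
	for time.Now().Before(deadline) {
		ok, err := allRunning("addons-775382", "volcano-system", "app=volcano-scheduler")
		if err != nil {
			log.Fatal(err)
		}
		if ok {
			fmt.Println("all matching pods are Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for pods")
}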

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-775382 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-775382 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.44s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-775382 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-775382 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [be38301b-0d3c-4076-8ff2-b19d1dc01b95] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [be38301b-0d3c-4076-8ff2-b19d1dc01b95] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.00371675s
addons_test.go:694: (dbg) Run:  kubectl --context addons-775382 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-775382 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-775382 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.44s)
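The two exec steps above check that the gcp-auth addon injected credentials into the busybox pod's environment. A minimal Go sketch of the first check, reusing the pod and context names from this log and asserting nothing further about the addon's implementation:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Ask the pod whether GOOGLE_APPLICATION_CREDENTIALS was injected.
	out, err := exec.Command("kubectl", "--context", "addons-775382",
		"exec", "busybox", "--", "/bin/sh", "-c",
		"printenv GOOGLE_APPLICATION_CREDENTIALS").Output()
	if err != nil {
		log.Fatalf("exec failed: %v", err)
	}
	path := strings.TrimSpace(string(out))
	if path == "" {
		log.Fatal("GOOGLE_APPLICATION_CREDENTIALS is not set in the pod")
	}
	fmt.Println("credentials file injected at:", path)
}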

                                                
                                    
TestAddons/parallel/Registry (14.56s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 2.856292ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-xrlrl" [a1de6550-ff18-4773-b7e9-03d798c39341] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002391153s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-mskw8" [7a2a3180-1119-4eed-9d51-0fdbda15aa76] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003478399s
addons_test.go:392: (dbg) Run:  kubectl --context addons-775382 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-775382 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-775382 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.778422285s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-775382 ip
2025/11/20 20:23:58 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775382 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.56s)
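The final step of this test resolves the node IP and issues a plain HTTP GET against port 5000 (the DEBUG line above). A small Go sketch of that probe; the profile name comes from this log, and nothing is implied here about how the registry is exposed beyond what the log shows:

package main

import (
	"fmt"
	"log"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Resolve the node IP the same way the test does: "minikube ip".
	ipOut, err := exec.Command("minikube", "-p", "addons-775382", "ip").Output()
	if err != nil {
		log.Fatalf("minikube ip failed: %v", err)
	}
	ip := strings.TrimSpace(string(ipOut))

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(fmt.Sprintf("http://%s:5000", ip))
	if err != nil {
		log.Fatalf("registry not reachable: %v", err)
	}
	defer resp.Body.Close()
	fmt.Println("registry responded with", resp.Status)
}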

                                                
                                    
TestAddons/parallel/RegistryCreds (0.66s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 51.163927ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-775382
addons_test.go:332: (dbg) Run:  kubectl --context addons-775382 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775382 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.66s)

                                                
                                    
TestAddons/parallel/Ingress (18.35s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-775382 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-775382 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-775382 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [6ad52e23-94d0-4228-9ee8-960dffb359a3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [6ad52e23-94d0-4228-9ee8-960dffb359a3] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003833302s
I1120 20:24:12.984777    7731 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-775382 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-775382 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-775382 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775382 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-775382 addons disable ingress-dns --alsologtostderr -v=1: (1.524673278s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775382 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-775382 addons disable ingress --alsologtostderr -v=1: (7.666841829s)
--- PASS: TestAddons/parallel/Ingress (18.35s)
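The ingress verification above sends a request with "Host: nginx.example.com" so the ingress-nginx controller routes it to the test pod. The test does this via "minikube ssh ... curl"; a hedged Go sketch of the same request made directly from the host follows, using the node IP reported in this run:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "http://192.168.49.2/", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Host = "nginx.example.com" // host rule the Ingress matches on, per the log above

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		log.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %s, %d bytes of body\n", resp.Status, len(body))
}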

                                                
                                    
TestAddons/parallel/InspektorGadget (10.62s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-7fhjr" [b02da320-f86b-48a4-8ee1-17e8f08924a0] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003623217s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775382 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-775382 addons disable inspektor-gadget --alsologtostderr -v=1: (5.617345667s)
--- PASS: TestAddons/parallel/InspektorGadget (10.62s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.64s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.082297ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-8j9xs" [a332f01b-3264-4ca7-9736-b40aadc46dcb] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003557852s
addons_test.go:463: (dbg) Run:  kubectl --context addons-775382 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775382 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.64s)
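The closing check above works because "kubectl top pods" only succeeds once the metrics API is serving. A tiny Go sketch of that verification, with the context name taken from this log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "addons-775382",
		"top", "pods", "-n", "kube-system").Output()
	if err != nil {
		log.Fatalf("kubectl top failed (is metrics-server ready?): %v", err)
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	if len(lines) < 2 { // header plus at least one pod row
		log.Fatal("no pod metrics returned")
	}
	fmt.Printf("metrics for %d pods\n", len(lines)-1)
}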

                                                
                                    
TestAddons/parallel/CSI (47.91s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1120 20:23:59.816878    7731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1120 20:23:59.820023    7731 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1120 20:23:59.820042    7731 kapi.go:107] duration metric: took 3.179729ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.187795ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-775382 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-775382 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [d7d05d66-38f8-4884-acad-ec298c3b2329] Pending
helpers_test.go:352: "task-pv-pod" [d7d05d66-38f8-4884-acad-ec298c3b2329] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [d7d05d66-38f8-4884-acad-ec298c3b2329] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003615422s
addons_test.go:572: (dbg) Run:  kubectl --context addons-775382 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-775382 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-775382 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-775382 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-775382 delete pod task-pv-pod: (1.000465231s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-775382 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-775382 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-775382 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [b1e4357c-d8f0-4c19-8b6c-528c07c70371] Pending
helpers_test.go:352: "task-pv-pod-restore" [b1e4357c-d8f0-4c19-8b6c-528c07c70371] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [b1e4357c-d8f0-4c19-8b6c-528c07c70371] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003763787s
addons_test.go:614: (dbg) Run:  kubectl --context addons-775382 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-775382 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-775382 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775382 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775382 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-775382 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.499816762s)
--- PASS: TestAddons/parallel/CSI (47.91s)
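
The repeated helpers_test.go:402 lines above are a poll loop on the claim's phase. A minimal Go sketch of that pattern, assuming a 2-second interval, a hard-coded default namespace, and Bound as the target phase (all illustrative; the helper name is not from the test source):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase shells out to the same kubectl jsonpath query the log
// records, until the claim reports the wanted phase or the timeout expires.
func waitForPVCPhase(kubecontext, name, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubecontext,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", "default").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %q did not reach phase %q within %v", name, want, timeout)
}

func main() {
	// Same claim and context as the test run above.
	if err := waitForPVCPhase("addons-775382", "hpvc", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}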

                                                
                                    
x
+
TestAddons/parallel/Headlamp (15.45s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-775382 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-vqxnv" [366a8797-ad6d-4e93-bcab-d1f09b3558ee] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-vqxnv" [366a8797-ad6d-4e93-bcab-d1f09b3558ee] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-vqxnv" [366a8797-ad6d-4e93-bcab-d1f09b3558ee] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.002807022s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775382 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-775382 addons disable headlamp --alsologtostderr -v=1: (5.64807459s)
--- PASS: TestAddons/parallel/Headlamp (15.45s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.52s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-lk8c5" [1825ad4f-5304-44fd-a7b4-6dfdc02b7dc5] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003152846s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775382 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (50.63s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-775382 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-775382 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775382 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [fdad5f66-b648-4230-8198-050a0dea2dff] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [fdad5f66-b648-4230-8198-050a0dea2dff] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [fdad5f66-b648-4230-8198-050a0dea2dff] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.002754801s
addons_test.go:967: (dbg) Run:  kubectl --context addons-775382 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-775382 ssh "cat /opt/local-path-provisioner/pvc-b6af4f40-4f21-47dd-8451-c4eb8cd1cd54_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-775382 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-775382 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775382 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-775382 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.740454248s)
--- PASS: TestAddons/parallel/LocalPath (50.63s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.5s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-qgs9h" [2035563d-94dc-4ebc-9054-5e9500c7a2f2] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004177077s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775382 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.50s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.67s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-mxlsc" [bf486160-c56d-4419-a68d-4b6f09bfc84c] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003333713s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775382 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-775382 addons disable yakd --alsologtostderr -v=1: (5.667951323s)
--- PASS: TestAddons/parallel/Yakd (10.67s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.47s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-w4cmf" [fbf2616d-79f4-41dd-909d-cfd299d23710] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003006554s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775382 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.47s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.61s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-775382
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-775382: (12.329575822s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-775382
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-775382
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-775382
--- PASS: TestAddons/StoppedEnableDisable (12.61s)

                                                
                                    
x
+
TestCertOptions (23.83s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-636195 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-636195 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (21.007276891s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-636195 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-636195 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-636195 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-636195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-636195
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-636195: (2.049336296s)
--- PASS: TestCertOptions (23.83s)
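
cert_options_test.go:60 dumps the apiserver certificate with openssl to confirm the extra SANs requested via --apiserver-ips and --apiserver-names. A standalone Go sketch of the same kind of check, assuming the certificate has been copied out of the node to a local file named apiserver.crt (the file name and the exact assertions are assumptions, not the test's own logic):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	// Read and parse the PEM-encoded apiserver certificate.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	// Check the IP and DNS SANs this test run requested.
	wantIP := net.ParseIP("192.168.15.15")
	ipOK, dnsOK := false, false
	for _, ip := range cert.IPAddresses {
		if ip.Equal(wantIP) {
			ipOK = true
		}
	}
	for _, name := range cert.DNSNames {
		if name == "www.google.com" {
			dnsOK = true
		}
	}
	fmt.Printf("IP SAN 192.168.15.15 present: %v, DNS SAN www.google.com present: %v\n", ipOK, dnsOK)
}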

                                                
                                    
x
+
TestCertExpiration (212.61s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-137718 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-137718 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (22.784945305s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-137718 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-137718 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.9376631s)
helpers_test.go:175: Cleaning up "cert-expiration-137718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-137718
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-137718: (3.887343316s)
--- PASS: TestCertExpiration (212.61s)

                                                
                                    
x
+
TestForceSystemdFlag (29.47s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-431737 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-431737 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (25.070658513s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-431737 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-431737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-431737
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-431737: (4.041854146s)
--- PASS: TestForceSystemdFlag (29.47s)
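
docker_test.go:121 reads /etc/containerd/config.toml from the node; presumably the point is to confirm that --force-systemd enabled the systemd cgroup driver. A sketch of that check, where the exact string being asserted ("SystemdCgroup = true") is an assumption rather than something shown in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Fetch the containerd config from the node, as the test does over ssh.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-431737",
		"ssh", "cat /etc/containerd/config.toml").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("SystemdCgroup = true present:", strings.Contains(string(out), "SystemdCgroup = true"))
}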

                                                
                                    
x
+
TestForceSystemdEnv (30.54s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-967977 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-967977 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (27.798558423s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-967977 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-967977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-967977
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-967977: (2.470455594s)
--- PASS: TestForceSystemdEnv (30.54s)

                                                
                                    
x
+
TestDockerEnvContainerd (35.25s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-363652 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-363652 --driver=docker  --container-runtime=containerd: (19.984109012s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-363652"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXfRRip7/agent.31862" SSH_AGENT_PID="31863" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXfRRip7/agent.31862" SSH_AGENT_PID="31863" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXfRRip7/agent.31862" SSH_AGENT_PID="31863" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.059914022s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXfRRip7/agent.31862" SSH_AGENT_PID="31863" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-363652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-363652
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-363652: (2.286948328s)
--- PASS: TestDockerEnvContainerd (35.25s)
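
The docker-env flow above exports an SSH agent and a DOCKER_HOST=ssh:// endpoint so a docker client on the host talks to the engine inside the minikube node. A sketch of driving such a remote engine from Go; the address reuses the port printed in this log, but in practice it is whatever `minikube docker-env --ssh-host` reports for your run:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Point the docker CLI at the engine inside the node over SSH.
	cmd := exec.Command("docker", "version")
	cmd.Env = append(os.Environ(), "DOCKER_HOST=ssh://docker@127.0.0.1:32773")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "docker version failed:", err)
	}
}

The log also carries SSH_AUTH_SOCK and SSH_AGENT_PID from the agent populated by --ssh-add, which is how the ssh connection authenticates.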

                                                
                                    
x
+
TestErrorSpam/setup (19.01s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-970443 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-970443 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-970443 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-970443 --driver=docker  --container-runtime=containerd: (19.009508631s)
--- PASS: TestErrorSpam/setup (19.01s)

                                                
                                    
x
+
TestErrorSpam/start (0.65s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-970443 --log_dir /tmp/nospam-970443 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-970443 --log_dir /tmp/nospam-970443 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-970443 --log_dir /tmp/nospam-970443 start --dry-run
--- PASS: TestErrorSpam/start (0.65s)

                                                
                                    
x
+
TestErrorSpam/status (0.94s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-970443 --log_dir /tmp/nospam-970443 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-970443 --log_dir /tmp/nospam-970443 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-970443 --log_dir /tmp/nospam-970443 status
--- PASS: TestErrorSpam/status (0.94s)

                                                
                                    
x
+
TestErrorSpam/pause (1.43s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-970443 --log_dir /tmp/nospam-970443 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-970443 --log_dir /tmp/nospam-970443 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-970443 --log_dir /tmp/nospam-970443 pause
--- PASS: TestErrorSpam/pause (1.43s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-970443 --log_dir /tmp/nospam-970443 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-970443 --log_dir /tmp/nospam-970443 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-970443 --log_dir /tmp/nospam-970443 unpause
--- PASS: TestErrorSpam/unpause (1.48s)

                                                
                                    
x
+
TestErrorSpam/stop (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-970443 --log_dir /tmp/nospam-970443 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-970443 --log_dir /tmp/nospam-970443 stop: (1.277318941s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-970443 --log_dir /tmp/nospam-970443 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-970443 --log_dir /tmp/nospam-970443 stop
--- PASS: TestErrorSpam/stop (1.48s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21923-3769/.minikube/files/etc/test/nested/copy/7731/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (41.91s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-199012 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-199012 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (41.913188172s)
--- PASS: TestFunctional/serial/StartWithProxy (41.91s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (6.04s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1120 20:26:52.238468    7731 config.go:182] Loaded profile config "functional-199012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-199012 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-199012 --alsologtostderr -v=8: (6.038547183s)
functional_test.go:678: soft start took 6.039243035s for "functional-199012" cluster.
I1120 20:26:58.277309    7731 config.go:182] Loaded profile config "functional-199012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.04s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-199012 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.4s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.40s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (0.83s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-199012 /tmp/TestFunctionalserialCacheCmdcacheadd_local3280832759/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 cache add minikube-local-cache-test:functional-199012
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 cache delete minikube-local-cache-test:functional-199012
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-199012
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.83s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199012 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (275.699276ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.48s)
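
The sequence above removes pause:latest from the node, confirms crictl inspecti now fails, reloads the cache, and confirms the image is back. A sketch of that remove-check-reload cycle, using the exit status of crictl inspecti as the presence test (the helper name is illustrative; the commands are the ones in the log):

package main

import (
	"fmt"
	"os/exec"
)

// ensureCachedImage reloads the minikube image cache when the image is not
// visible to the node's container runtime.
func ensureCachedImage(profile, image string) error {
	inspect := func() error {
		return exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "sudo", "crictl", "inspecti", image).Run()
	}
	if inspect() == nil {
		return nil // already present on the node
	}
	if err := exec.Command("out/minikube-linux-amd64", "-p", profile, "cache", "reload").Run(); err != nil {
		return fmt.Errorf("cache reload failed: %w", err)
	}
	return inspect()
}

func main() {
	fmt.Println(ensureCachedImage("functional-199012", "registry.k8s.io/pause:latest"))
}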

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 kubectl -- --context functional-199012 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-199012 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (47.41s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-199012 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1120 20:27:48.808681    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:27:48.815039    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:27:48.826406    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:27:48.847752    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:27:48.889155    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:27:48.970599    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:27:49.132112    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:27:49.453762    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:27:50.095140    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-199012 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.41206524s)
functional_test.go:776: restart took 47.412158236s for "functional-199012" cluster.
I1120 20:27:51.296805    7731 config.go:182] Loaded profile config "functional-199012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (47.41s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-199012 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
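
functional_test.go:825 pulls the control-plane pods as JSON and reports each component's phase and Ready condition. A sketch of decoding just those fields; the struct below is trimmed to what the check needs and is not the test's own type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList models only the fields the health summary uses.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-199012",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status // "True" corresponds to the "Ready" status lines above
			}
		}
		fmt.Printf("%s phase: %s, Ready condition: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}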

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.19s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 logs
E1120 20:27:51.376889    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-199012 logs: (1.190621198s)
--- PASS: TestFunctional/serial/LogsCmd (1.19s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.2s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 logs --file /tmp/TestFunctionalserialLogsFileCmd295973956/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-199012 logs --file /tmp/TestFunctionalserialLogsFileCmd295973956/001/logs.txt: (1.198921188s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.20s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.4s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-199012 apply -f testdata/invalidsvc.yaml
E1120 20:27:53.938900    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-199012
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-199012: exit status 115 (336.198869ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30512 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-199012 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.40s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199012 config get cpus: exit status 14 (77.594575ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199012 config get cpus: exit status 14 (61.71221ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (9.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-199012 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-199012 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 51007: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.23s)
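
The dashboard test starts the command as a background daemon and later stops it; the helpers_test.go:525 line records the benign case where the process has already exited by the time it is killed. A sketch of tolerating that outcome when stopping a child process (sleep 30 stands in for the dashboard command, which is an assumption for illustration only):

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	// Start a long-running child, then stop it a moment later.
	cmd := exec.Command("sleep", "30")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	time.Sleep(time.Second)

	// A child that has already finished and been reaped yields os.ErrProcessDone
	// ("os: process already finished"); treat it as benign, as the helper above does.
	if err := cmd.Process.Kill(); err != nil && !errors.Is(err, os.ErrProcessDone) {
		fmt.Println("unable to kill pid", cmd.Process.Pid, ":", err)
	}
	_ = cmd.Wait()
}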

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-199012 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-199012 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (193.38454ms)

                                                
                                                
-- stdout --
	* [functional-199012] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-3769/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3769/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 20:28:04.942254   50255 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:28:04.942379   50255 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:28:04.942393   50255 out.go:374] Setting ErrFile to fd 2...
	I1120 20:28:04.942400   50255 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:28:04.942648   50255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
	I1120 20:28:04.943221   50255 out.go:368] Setting JSON to false
	I1120 20:28:04.944588   50255 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":637,"bootTime":1763669848,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:28:04.944706   50255 start.go:143] virtualization: kvm guest
	I1120 20:28:04.947673   50255 out.go:179] * [functional-199012] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:28:04.949226   50255 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:28:04.949225   50255 notify.go:221] Checking for updates...
	I1120 20:28:04.950759   50255 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:28:04.952244   50255 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3769/kubeconfig
	I1120 20:28:04.953657   50255 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3769/.minikube
	I1120 20:28:04.954990   50255 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:28:04.956499   50255 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:28:04.958955   50255 config.go:182] Loaded profile config "functional-199012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:28:04.959608   50255 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:28:04.986512   50255 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:28:04.986638   50255 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:28:05.056663   50255 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-20 20:28:05.045806593 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:28:05.056815   50255 docker.go:319] overlay module found
	I1120 20:28:05.059080   50255 out.go:179] * Using the docker driver based on existing profile
	I1120 20:28:05.061115   50255 start.go:309] selected driver: docker
	I1120 20:28:05.061134   50255 start.go:930] validating driver "docker" against &{Name:functional-199012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-199012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:28:05.061245   50255 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:28:05.063720   50255 out.go:203] 
	W1120 20:28:05.065470   50255 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1120 20:28:05.067222   50255 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-199012 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-199012 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-199012 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (195.949528ms)

                                                
                                                
-- stdout --
	* [functional-199012] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-3769/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3769/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 20:28:05.392326   50589 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:28:05.392492   50589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:28:05.392505   50589 out.go:374] Setting ErrFile to fd 2...
	I1120 20:28:05.392512   50589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:28:05.392921   50589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
	I1120 20:28:05.393499   50589 out.go:368] Setting JSON to false
	I1120 20:28:05.394890   50589 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":637,"bootTime":1763669848,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:28:05.395012   50589 start.go:143] virtualization: kvm guest
	I1120 20:28:05.397352   50589 out.go:179] * [functional-199012] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1120 20:28:05.399639   50589 notify.go:221] Checking for updates...
	I1120 20:28:05.399652   50589 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:28:05.401415   50589 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:28:05.402972   50589 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3769/kubeconfig
	I1120 20:28:05.404628   50589 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3769/.minikube
	I1120 20:28:05.406403   50589 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:28:05.408033   50589 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:28:05.410109   50589 config.go:182] Loaded profile config "functional-199012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:28:05.410799   50589 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:28:05.437249   50589 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:28:05.437359   50589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:28:05.508259   50589 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-20 20:28:05.497762034 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:28:05.508421   50589 docker.go:319] overlay module found
	I1120 20:28:05.510713   50589 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1120 20:28:05.512176   50589 start.go:309] selected driver: docker
	I1120 20:28:05.512195   50589 start.go:930] validating driver "docker" against &{Name:functional-199012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-199012 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:28:05.512331   50589 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:28:05.514278   50589 out.go:203] 
	W1120 20:28:05.515700   50589 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1120 20:28:05.516957   50589 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.08s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-199012 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-199012 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-t5z7m" [7ce9f544-1e54-445d-95cb-f2c5830b38b2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-t5z7m" [7ce9f544-1e54-445d-95cb-f2c5830b38b2] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003175421s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30631
functional_test.go:1680: http://192.168.49.2:30631: success! body:
Request served by hello-node-connect-7d85dfc575-t5z7m

HTTP/1.1 GET /

Host: 192.168.49.2:30631
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.68s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (23.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [b94f719d-8288-477e-ac5e-22a98ab337ef] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00422298s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-199012 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-199012 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-199012 get pvc myclaim -o=json
I1120 20:28:22.057049    7731 retry.go:31] will retry after 1.00698573s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:034e8d9c-4c5c-4c15-8f43-5aa2197f0b34 ResourceVersion:802 Generation:0 CreationTimestamp:2025-11-20 20:28:21 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001d34730 VolumeMode:0xc001d34740 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-199012 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-199012 apply -f testdata/storage-provisioner/pod.yaml
I1120 20:28:23.233564    7731 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [0d7af5ab-c66e-4ec3-bb4c-325b7aa0c39c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [0d7af5ab-c66e-4ec3-bb4c-325b7aa0c39c] Running
E1120 20:28:29.783773    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.00306235s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-199012 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-199012 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-199012 apply -f testdata/storage-provisioner/pod.yaml
I1120 20:28:33.371026    7731 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [b9a57b5c-65c2-40b3-86d7-b9e4b1ca14f6] Pending
helpers_test.go:352: "sp-pod" [b9a57b5c-65c2-40b3-86d7-b9e4b1ca14f6] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003755562s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-199012 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.71s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh -n functional-199012 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 cp functional-199012:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd62927865/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh -n functional-199012 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh -n functional-199012 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.80s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (20.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-199012 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-5lwvn" [29a2c14d-e77d-4bed-808d-80edba35c560] Pending
helpers_test.go:352: "mysql-5bb876957f-5lwvn" [29a2c14d-e77d-4bed-808d-80edba35c560] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-5lwvn" [29a2c14d-e77d-4bed-808d-80edba35c560] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.003045379s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-199012 exec mysql-5bb876957f-5lwvn -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-199012 exec mysql-5bb876957f-5lwvn -- mysql -ppassword -e "show databases;": exit status 1 (150.315429ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1120 20:28:14.520491    7731 retry.go:31] will retry after 507.74936ms: exit status 1
2025/11/20 20:28:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1812: (dbg) Run:  kubectl --context functional-199012 exec mysql-5bb876957f-5lwvn -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-199012 exec mysql-5bb876957f-5lwvn -- mysql -ppassword -e "show databases;": exit status 1 (130.105503ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1120 20:28:15.158923    7731 retry.go:31] will retry after 1.577044992s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-199012 exec mysql-5bb876957f-5lwvn -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-199012 exec mysql-5bb876957f-5lwvn -- mysql -ppassword -e "show databases;": exit status 1 (115.919957ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1120 20:28:16.852509    7731 retry.go:31] will retry after 1.479406455s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-199012 exec mysql-5bb876957f-5lwvn -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.28s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/7731/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh "sudo cat /etc/test/nested/copy/7731/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/7731.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh "sudo cat /etc/ssl/certs/7731.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/7731.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh "sudo cat /usr/share/ca-certificates/7731.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/77312.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh "sudo cat /etc/ssl/certs/77312.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/77312.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh "sudo cat /usr/share/ca-certificates/77312.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.82s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-199012 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199012 ssh "sudo systemctl is-active docker": exit status 1 (290.342529ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199012 ssh "sudo systemctl is-active crio": exit status 1 (286.369868ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-199012 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-199012 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-c4nfc" [42dc91ad-c037-48de-a82a-464401b79952] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-c4nfc" [42dc91ad-c037-48de-a82a-464401b79952] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003609899s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.16s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-199012 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-199012
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-199012
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-199012 image ls --format short --alsologtostderr:
I1120 20:28:19.413004   55806 out.go:360] Setting OutFile to fd 1 ...
I1120 20:28:19.413297   55806 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:28:19.413307   55806 out.go:374] Setting ErrFile to fd 2...
I1120 20:28:19.413312   55806 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:28:19.413507   55806 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
I1120 20:28:19.414008   55806 config.go:182] Loaded profile config "functional-199012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1120 20:28:19.414116   55806 config.go:182] Loaded profile config "functional-199012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1120 20:28:19.414513   55806 cli_runner.go:164] Run: docker container inspect functional-199012 --format={{.State.Status}}
I1120 20:28:19.432277   55806 ssh_runner.go:195] Run: systemctl --version
I1120 20:28:19.432337   55806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-199012
I1120 20:28:19.449283   55806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/functional-199012/id_rsa Username:docker}
I1120 20:28:19.542004   55806 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-199012 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:c80c8d │ 22.8MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ docker.io/library/minikube-local-cache-test │ functional-199012  │ sha256:8bfd51 │ 992B   │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:c3994b │ 27.1MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:7dd6aa │ 17.4MB │
│ docker.io/kicbase/echo-server               │ functional-199012  │ sha256:9056ab │ 2.37MB │
│ docker.io/kicbase/echo-server               │ latest             │ sha256:9056ab │ 2.37MB │
│ docker.io/library/mysql                     │ 5.7                │ sha256:510733 │ 138MB  │
│ docker.io/library/nginx                     │ alpine             │ sha256:d4918c │ 22.6MB │
│ localhost/my-image                          │ functional-199012  │ sha256:d2e181 │ 775kB  │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:fc2517 │ 26MB   │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-199012 image ls --format table --alsologtostderr:
I1120 20:28:22.734658   56361 out.go:360] Setting OutFile to fd 1 ...
I1120 20:28:22.734886   56361 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:28:22.734894   56361 out.go:374] Setting ErrFile to fd 2...
I1120 20:28:22.734897   56361 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:28:22.735090   56361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
I1120 20:28:22.735646   56361 config.go:182] Loaded profile config "functional-199012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1120 20:28:22.735733   56361 config.go:182] Loaded profile config "functional-199012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1120 20:28:22.736094   56361 cli_runner.go:164] Run: docker container inspect functional-199012 --format={{.State.Status}}
I1120 20:28:22.754238   56361 ssh_runner.go:195] Run: systemctl --version
I1120 20:28:22.754301   56361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-199012
I1120 20:28:22.772831   56361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/functional-199012/id_rsa Username:docker}
I1120 20:28:22.867928   56361 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-199012 image ls --format json --alsologtostderr:
[{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"17385568"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-199012","docker.io/kicbase/echo-server:latest"],"size":"2372971"},{"id":"sha256:8bfd511426ab66c2072cdd3eae25e09fea53dafc5f27f1cf301191516786847a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-199012"],"size":"992"},{"id":"sha256:d2e181e84a0450db519e4289875977c158b1bf32f38e15eee68610a0675f1928","repoDigests":[],"repoTags":["localhost/my-image:functional-199012"],"size":"774889"},{"id":"sha256:c3994bc
6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"27061991"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"s
ize":"19746404"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22631814"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["regi
stry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"22820214"},{"id":"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"25963718"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-199012 image ls --format json --alsologtostderr:
I1120 20:28:22.505157   56263 out.go:360] Setting OutFile to fd 1 ...
I1120 20:28:22.505281   56263 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:28:22.505291   56263 out.go:374] Setting ErrFile to fd 2...
I1120 20:28:22.505295   56263 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:28:22.505508   56263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
I1120 20:28:22.506084   56263 config.go:182] Loaded profile config "functional-199012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1120 20:28:22.506190   56263 config.go:182] Loaded profile config "functional-199012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1120 20:28:22.506591   56263 cli_runner.go:164] Run: docker container inspect functional-199012 --format={{.State.Status}}
I1120 20:28:22.527230   56263 ssh_runner.go:195] Run: systemctl --version
I1120 20:28:22.527291   56263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-199012
I1120 20:28:22.548479   56263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/functional-199012/id_rsa Username:docker}
I1120 20:28:22.646207   56263 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-199012 image ls --format yaml --alsologtostderr:
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "17385568"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:8bfd511426ab66c2072cdd3eae25e09fea53dafc5f27f1cf301191516786847a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-199012
size: "992"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "25963718"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-199012
- docker.io/kicbase/echo-server:latest
size: "2372971"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "27061991"
- id: sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "22820214"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "22631814"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-199012 image ls --format yaml --alsologtostderr:
I1120 20:28:19.626685   55877 out.go:360] Setting OutFile to fd 1 ...
I1120 20:28:19.626946   55877 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:28:19.626957   55877 out.go:374] Setting ErrFile to fd 2...
I1120 20:28:19.626961   55877 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:28:19.627122   55877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
I1120 20:28:19.627671   55877 config.go:182] Loaded profile config "functional-199012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1120 20:28:19.627758   55877 config.go:182] Loaded profile config "functional-199012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1120 20:28:19.628082   55877 cli_runner.go:164] Run: docker container inspect functional-199012 --format={{.State.Status}}
I1120 20:28:19.646127   55877 ssh_runner.go:195] Run: systemctl --version
I1120 20:28:19.646166   55877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-199012
I1120 20:28:19.663500   55877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/functional-199012/id_rsa Username:docker}
I1120 20:28:19.756039   55877 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
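
Each entry in the image ls YAML above carries the same four fields (id, repoDigests, repoTags, size). As a hedged illustration only, and not part of the minikube test suite, here is a minimal Go sketch that parses output of that shape, assuming the gopkg.in/yaml.v3 package and a local images.yaml file captured from `minikube image ls --format yaml`:

// parse_image_list.go - minimal sketch for reading `minikube image ls --format yaml`
// output such as the listing above. The struct name, file name, and the yaml.v3
// dependency are illustrative assumptions, not part of the test suite.
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

type listedImage struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"` // quoted in the YAML, so kept as a string
}

func main() {
	data, err := os.ReadFile("images.yaml") // e.g. redirected from `minikube image ls --format yaml`
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := yaml.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%s  tags=%v  size=%s bytes\n", img.ID, img.RepoTags, img.Size)
	}
}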

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199012 ssh pgrep buildkitd: exit status 1 (262.440774ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 image build -t localhost/my-image:functional-199012 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-199012 image build -t localhost/my-image:functional-199012 testdata/build --alsologtostderr: (2.513334581s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-199012 image build -t localhost/my-image:functional-199012 testdata/build --alsologtostderr:
I1120 20:28:20.105173   56037 out.go:360] Setting OutFile to fd 1 ...
I1120 20:28:20.105317   56037 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:28:20.105326   56037 out.go:374] Setting ErrFile to fd 2...
I1120 20:28:20.105330   56037 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:28:20.105531   56037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
I1120 20:28:20.106095   56037 config.go:182] Loaded profile config "functional-199012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1120 20:28:20.106697   56037 config.go:182] Loaded profile config "functional-199012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1120 20:28:20.107064   56037 cli_runner.go:164] Run: docker container inspect functional-199012 --format={{.State.Status}}
I1120 20:28:20.124714   56037 ssh_runner.go:195] Run: systemctl --version
I1120 20:28:20.124767   56037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-199012
I1120 20:28:20.141681   56037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/functional-199012/id_rsa Username:docker}
I1120 20:28:20.234827   56037 build_images.go:162] Building image from path: /tmp/build.401635437.tar
I1120 20:28:20.234896   56037 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1120 20:28:20.242764   56037 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.401635437.tar
I1120 20:28:20.246292   56037 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.401635437.tar: stat -c "%s %y" /var/lib/minikube/build/build.401635437.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.401635437.tar': No such file or directory
I1120 20:28:20.246323   56037 ssh_runner.go:362] scp /tmp/build.401635437.tar --> /var/lib/minikube/build/build.401635437.tar (3072 bytes)
I1120 20:28:20.263766   56037 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.401635437
I1120 20:28:20.271144   56037 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.401635437 -xf /var/lib/minikube/build/build.401635437.tar
I1120 20:28:20.278819   56037 containerd.go:394] Building image: /var/lib/minikube/build/build.401635437
I1120 20:28:20.278870   56037 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.401635437 --local dockerfile=/var/lib/minikube/build/build.401635437 --output type=image,name=localhost/my-image:functional-199012
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:d6f0097403af3a0ac2dd4c8eb8206f249d708085ade7f23cde01c0170128a836 done
#8 exporting config sha256:d2e181e84a0450db519e4289875977c158b1bf32f38e15eee68610a0675f1928
#8 exporting config sha256:d2e181e84a0450db519e4289875977c158b1bf32f38e15eee68610a0675f1928 done
#8 naming to localhost/my-image:functional-199012 done
#8 DONE 0.1s
I1120 20:28:22.540771   56037 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.401635437 --local dockerfile=/var/lib/minikube/build/build.401635437 --output type=image,name=localhost/my-image:functional-199012: (2.261874071s)
I1120 20:28:22.540864   56037 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.401635437
I1120 20:28:22.550279   56037 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.401635437.tar
I1120 20:28:22.558006   56037 build_images.go:218] Built localhost/my-image:functional-199012 from /tmp/build.401635437.tar
I1120 20:28:22.558038   56037 build_images.go:134] succeeded building to: functional-199012
I1120 20:28:22.558044   56037 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.00s)
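
The build above resolves to three steps: FROM gcr.io/k8s-minikube/busybox:latest, RUN true, and ADD content.txt /. For readers who want to rerun that exact invocation outside the harness, here is a minimal Go sketch, assuming the binary path, profile, and testdata/build context shown in the log; this is not the functional_test.go implementation:

// Minimal sketch: re-run the image build exercised by ImageCommands/ImageBuild.
// Binary path, profile name, tag, and build context directory are taken from the
// log above; adjust them for a local checkout.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-199012",
		"image", "build",
		"-t", "localhost/my-image:functional-199012",
		"testdata/build", "--alsologtostderr")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("image build failed: %v", err)
	}
}

As in the log, the result can then be confirmed with `out/minikube-linux-amd64 -p functional-199012 image ls`.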

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-199012
E1120 20:27:59.060440    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 image load --daemon kicbase/echo-server:functional-199012 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 image load --daemon kicbase/echo-server:functional-199012 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-199012
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 image load --daemon kicbase/echo-server:functional-199012 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "376.340531ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "64.371664ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 image save kicbase/echo-server:functional-199012 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "388.679844ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "71.951958ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 image rm kicbase/echo-server:functional-199012 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-199012 /tmp/TestFunctionalparallelMountCmdany-port178510967/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763670483678341145" to /tmp/TestFunctionalparallelMountCmdany-port178510967/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763670483678341145" to /tmp/TestFunctionalparallelMountCmdany-port178510967/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763670483678341145" to /tmp/TestFunctionalparallelMountCmdany-port178510967/001/test-1763670483678341145
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199012 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (327.261988ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1120 20:28:04.005986    7731 retry.go:31] will retry after 373.963746ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 20 20:28 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 20 20:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 20 20:28 test-1763670483678341145
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh cat /mount-9p/test-1763670483678341145
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-199012 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [380c8671-72c9-4e04-a2fd-d1471b6f36cc] Pending
helpers_test.go:352: "busybox-mount" [380c8671-72c9-4e04-a2fd-d1471b6f36cc] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [380c8671-72c9-4e04-a2fd-d1471b6f36cc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [380c8671-72c9-4e04-a2fd-d1471b6f36cc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003973425s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-199012 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-199012 /tmp/TestFunctionalparallelMountCmdany-port178510967/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.90s)
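
Note the retry at retry.go:31 above: the first findmnt probe runs before the 9p mount is visible and exits non-zero, and the harness simply retries. Here is a minimal Go sketch of that wait-for-mount pattern, assuming the same binary path, profile, and /mount-9p mount point; it is illustrative only, not the mount test's own code:

// Minimal sketch of "wait for the 9p mount to appear": poll `findmnt -T /mount-9p`
// over `minikube ssh` until it succeeds or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForMount(profile, mountPoint string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if err := cmd.Run(); err == nil {
			return nil // mount is visible inside the guest
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("mount %s not visible after %s", mountPoint, timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForMount("functional-199012", "/mount-9p", 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("9p mount is up")
}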

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-199012
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 image save --daemon kicbase/echo-server:functional-199012 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-199012
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.96s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-199012 service list -o json: (1.09672023s)
functional_test.go:1504: Took "1.096810014s" to run "out/minikube-linux-amd64 -p functional-199012 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31214
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 service hello-node --url
E1120 20:28:09.302352    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31214
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-199012 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-199012 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-199012 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-199012 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 51868: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-199012 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-199012 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [b4dd5c3d-3206-4da2-b65a-bf296fb80f98] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [b4dd5c3d-3206-4da2-b65a-bf296fb80f98] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.006602428s
I1120 20:28:22.224684    7731 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.30s)
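
The tunnel test above first applies testdata/testsvc.yaml and waits up to 4m0s for a pod labelled run=nginx-svc to report Running. A minimal Go sketch of an equivalent wait using kubectl directly against the same context follows; the polling loop and timeout handling are assumptions, not the helpers_test.go implementation:

// Minimal sketch of waiting for the run=nginx-svc pod deployed from
// testdata/testsvc.yaml to reach the Running phase.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for {
		out, err := exec.Command("kubectl", "--context", "functional-199012",
			"get", "pods", "-l", "run=nginx-svc",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("nginx-svc pod is Running")
			return
		}
		if time.Now().After(deadline) {
			panic("timed out waiting for run=nginx-svc")
		}
		time.Sleep(2 * time.Second)
	}
}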

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-199012 /tmp/TestFunctionalparallelMountCmdspecific-port1962769973/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199012 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (316.414131ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1120 20:28:12.895781    7731 retry.go:31] will retry after 470.112839ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-199012 /tmp/TestFunctionalparallelMountCmdspecific-port1962769973/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199012 ssh "sudo umount -f /mount-9p": exit status 1 (295.620641ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-199012 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-199012 /tmp/TestFunctionalparallelMountCmdspecific-port1962769973/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.87s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-199012 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2289910975/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-199012 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2289910975/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-199012 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2289910975/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199012 ssh "findmnt -T" /mount1: exit status 1 (384.106785ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1120 20:28:14.843858    7731 retry.go:31] will retry after 693.586156ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-199012 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-199012 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-199012 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2289910975/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-199012 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2289910975/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-199012 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2289910975/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-199012 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.203.136 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
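
AccessDirect above confirms the tunnel endpoint answers plain HTTP, using the LoadBalancer ingress IP that the preceding IngressIP step read via jsonpath. Below is a minimal Go sketch combining those two checks; the context name is taken from the log, while the HTTP timeout and error handling are assumptions:

// Minimal sketch: read the LoadBalancer ingress IP that `minikube tunnel` assigned
// to nginx-svc, then issue a plain HTTP GET against it.
package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-199012",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		panic(err)
	}
	ip := strings.TrimSpace(string(out))
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("http://" + ip)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Printf("tunnel at http://%s answered with %s\n", ip, resp.Status)
}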

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-199012 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-199012
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-199012
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-199012
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (109.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1120 20:29:10.745533    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:30:32.666920    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-025200 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m48.753712358s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (109.47s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-025200 kubectl -- rollout status deployment/busybox: (2.831803335s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 kubectl -- exec busybox-7b57f96db7-hkwhp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 kubectl -- exec busybox-7b57f96db7-n6wxp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 kubectl -- exec busybox-7b57f96db7-pbxtm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 kubectl -- exec busybox-7b57f96db7-hkwhp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 kubectl -- exec busybox-7b57f96db7-n6wxp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 kubectl -- exec busybox-7b57f96db7-pbxtm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 kubectl -- exec busybox-7b57f96db7-hkwhp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 kubectl -- exec busybox-7b57f96db7-n6wxp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 kubectl -- exec busybox-7b57f96db7-pbxtm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.92s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 kubectl -- exec busybox-7b57f96db7-hkwhp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 kubectl -- exec busybox-7b57f96db7-hkwhp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 kubectl -- exec busybox-7b57f96db7-n6wxp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 kubectl -- exec busybox-7b57f96db7-n6wxp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 kubectl -- exec busybox-7b57f96db7-pbxtm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 kubectl -- exec busybox-7b57f96db7-pbxtm -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.12s)
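
PingHostFromPods resolves host.minikube.internal inside each busybox pod with the nslookup | awk | cut pipeline shown above and then pings the returned gateway address. The following minimal Go sketch performs the same check through kubectl exec; the pod name is copied from this run and would need to be substituted, and this is not the ha_test.go code:

// Minimal sketch: resolve host.minikube.internal from inside a busybox pod,
// then ping the resulting address once from the same pod.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-7b57f96db7-hkwhp" // taken from the run above; substitute a live pod name
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("kubectl", "--context", "ha-025200",
		"exec", pod, "--", "sh", "-c", resolve).Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal resolves to", hostIP)
	ping := exec.Command("kubectl", "--context", "ha-025200",
		"exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP)
	if err := ping.Run(); err != nil {
		panic(fmt.Sprintf("ping to %s failed: %v", hostIP, err))
	}
	fmt.Println("host is reachable from the pod")
}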

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-025200 node add --alsologtostderr -v 5: (23.69296117s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.57s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-025200 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (17.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 cp testdata/cp-test.txt ha-025200:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 cp ha-025200:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1787966102/001/cp-test_ha-025200.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 cp ha-025200:/home/docker/cp-test.txt ha-025200-m02:/home/docker/cp-test_ha-025200_ha-025200-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m02 "sudo cat /home/docker/cp-test_ha-025200_ha-025200-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 cp ha-025200:/home/docker/cp-test.txt ha-025200-m03:/home/docker/cp-test_ha-025200_ha-025200-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m03 "sudo cat /home/docker/cp-test_ha-025200_ha-025200-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 cp ha-025200:/home/docker/cp-test.txt ha-025200-m04:/home/docker/cp-test_ha-025200_ha-025200-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m04 "sudo cat /home/docker/cp-test_ha-025200_ha-025200-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 cp testdata/cp-test.txt ha-025200-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 cp ha-025200-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1787966102/001/cp-test_ha-025200-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 cp ha-025200-m02:/home/docker/cp-test.txt ha-025200:/home/docker/cp-test_ha-025200-m02_ha-025200.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200 "sudo cat /home/docker/cp-test_ha-025200-m02_ha-025200.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 cp ha-025200-m02:/home/docker/cp-test.txt ha-025200-m03:/home/docker/cp-test_ha-025200-m02_ha-025200-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m03 "sudo cat /home/docker/cp-test_ha-025200-m02_ha-025200-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 cp ha-025200-m02:/home/docker/cp-test.txt ha-025200-m04:/home/docker/cp-test_ha-025200-m02_ha-025200-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m04 "sudo cat /home/docker/cp-test_ha-025200-m02_ha-025200-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 cp testdata/cp-test.txt ha-025200-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 cp ha-025200-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1787966102/001/cp-test_ha-025200-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 cp ha-025200-m03:/home/docker/cp-test.txt ha-025200:/home/docker/cp-test_ha-025200-m03_ha-025200.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200 "sudo cat /home/docker/cp-test_ha-025200-m03_ha-025200.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 cp ha-025200-m03:/home/docker/cp-test.txt ha-025200-m02:/home/docker/cp-test_ha-025200-m03_ha-025200-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m02 "sudo cat /home/docker/cp-test_ha-025200-m03_ha-025200-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 cp ha-025200-m03:/home/docker/cp-test.txt ha-025200-m04:/home/docker/cp-test_ha-025200-m03_ha-025200-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m04 "sudo cat /home/docker/cp-test_ha-025200-m03_ha-025200-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 cp testdata/cp-test.txt ha-025200-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 cp ha-025200-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1787966102/001/cp-test_ha-025200-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 cp ha-025200-m04:/home/docker/cp-test.txt ha-025200:/home/docker/cp-test_ha-025200-m04_ha-025200.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200 "sudo cat /home/docker/cp-test_ha-025200-m04_ha-025200.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 cp ha-025200-m04:/home/docker/cp-test.txt ha-025200-m02:/home/docker/cp-test_ha-025200-m04_ha-025200-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m02 "sudo cat /home/docker/cp-test_ha-025200-m04_ha-025200-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 cp ha-025200-m04:/home/docker/cp-test.txt ha-025200-m03:/home/docker/cp-test_ha-025200-m04_ha-025200-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 ssh -n ha-025200-m03 "sudo cat /home/docker/cp-test_ha-025200-m04_ha-025200-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.05s)
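
Every CopyFile step above follows the same round trip: `minikube cp` a file onto a node, then `minikube ssh -n <node> sudo cat` it back. Here is a minimal Go sketch of one such round trip against the ha-025200-m02 node; the binary path, node name, and file paths mirror the log, while the byte-for-byte comparison is an assumption about how the copy is verified:

// Minimal sketch of one CopyFile round trip from the matrix above.
package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	src := "testdata/cp-test.txt"
	want, err := os.ReadFile(src)
	if err != nil {
		panic(err)
	}
	// Copy the file onto the m02 node of the ha-025200 cluster.
	cp := exec.Command("out/minikube-linux-amd64", "-p", "ha-025200",
		"cp", src, "ha-025200-m02:/home/docker/cp-test.txt")
	if err := cp.Run(); err != nil {
		panic(err)
	}
	// Read it back through ssh and verify the contents survived the trip.
	got, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-025200",
		"ssh", "-n", "ha-025200-m02", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		panic("copied file does not match the source")
	}
}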

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-025200 node stop m02 --alsologtostderr -v 5: (12.012426495s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-025200 status --alsologtostderr -v 5: exit status 7 (706.352311ms)

                                                
                                                
-- stdout --
	ha-025200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-025200-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-025200-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-025200-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 20:31:34.165775   77804 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:31:34.165907   77804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:31:34.165916   77804 out.go:374] Setting ErrFile to fd 2...
	I1120 20:31:34.165920   77804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:31:34.166113   77804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
	I1120 20:31:34.166269   77804 out.go:368] Setting JSON to false
	I1120 20:31:34.166299   77804 mustload.go:66] Loading cluster: ha-025200
	I1120 20:31:34.166400   77804 notify.go:221] Checking for updates...
	I1120 20:31:34.166716   77804 config.go:182] Loaded profile config "ha-025200": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:31:34.166734   77804 status.go:174] checking status of ha-025200 ...
	I1120 20:31:34.167175   77804 cli_runner.go:164] Run: docker container inspect ha-025200 --format={{.State.Status}}
	I1120 20:31:34.187739   77804 status.go:371] ha-025200 host status = "Running" (err=<nil>)
	I1120 20:31:34.187774   77804 host.go:66] Checking if "ha-025200" exists ...
	I1120 20:31:34.188156   77804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-025200
	I1120 20:31:34.208268   77804 host.go:66] Checking if "ha-025200" exists ...
	I1120 20:31:34.208702   77804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:31:34.208755   77804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-025200
	I1120 20:31:34.227349   77804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/ha-025200/id_rsa Username:docker}
	I1120 20:31:34.322055   77804 ssh_runner.go:195] Run: systemctl --version
	I1120 20:31:34.328656   77804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:31:34.341676   77804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:31:34.400600   77804 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-20 20:31:34.39095757 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:31:34.401142   77804 kubeconfig.go:125] found "ha-025200" server: "https://192.168.49.254:8443"
	I1120 20:31:34.401172   77804 api_server.go:166] Checking apiserver status ...
	I1120 20:31:34.401214   77804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:31:34.413651   77804 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1384/cgroup
	W1120 20:31:34.422235   77804 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1384/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1120 20:31:34.422289   77804 ssh_runner.go:195] Run: ls
	I1120 20:31:34.426155   77804 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1120 20:31:34.432254   77804 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1120 20:31:34.432279   77804 status.go:463] ha-025200 apiserver status = Running (err=<nil>)
	I1120 20:31:34.432293   77804 status.go:176] ha-025200 status: &{Name:ha-025200 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 20:31:34.432313   77804 status.go:174] checking status of ha-025200-m02 ...
	I1120 20:31:34.432598   77804 cli_runner.go:164] Run: docker container inspect ha-025200-m02 --format={{.State.Status}}
	I1120 20:31:34.451553   77804 status.go:371] ha-025200-m02 host status = "Stopped" (err=<nil>)
	I1120 20:31:34.451575   77804 status.go:384] host is not running, skipping remaining checks
	I1120 20:31:34.451583   77804 status.go:176] ha-025200-m02 status: &{Name:ha-025200-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 20:31:34.451609   77804 status.go:174] checking status of ha-025200-m03 ...
	I1120 20:31:34.451869   77804 cli_runner.go:164] Run: docker container inspect ha-025200-m03 --format={{.State.Status}}
	I1120 20:31:34.469962   77804 status.go:371] ha-025200-m03 host status = "Running" (err=<nil>)
	I1120 20:31:34.469986   77804 host.go:66] Checking if "ha-025200-m03" exists ...
	I1120 20:31:34.470251   77804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-025200-m03
	I1120 20:31:34.488752   77804 host.go:66] Checking if "ha-025200-m03" exists ...
	I1120 20:31:34.489044   77804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:31:34.489092   77804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-025200-m03
	I1120 20:31:34.508018   77804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/ha-025200-m03/id_rsa Username:docker}
	I1120 20:31:34.600711   77804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:31:34.614084   77804 kubeconfig.go:125] found "ha-025200" server: "https://192.168.49.254:8443"
	I1120 20:31:34.614114   77804 api_server.go:166] Checking apiserver status ...
	I1120 20:31:34.614150   77804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:31:34.625973   77804 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1287/cgroup
	W1120 20:31:34.635469   77804 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1287/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1120 20:31:34.635534   77804 ssh_runner.go:195] Run: ls
	I1120 20:31:34.639571   77804 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1120 20:31:34.643791   77804 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1120 20:31:34.643819   77804 status.go:463] ha-025200-m03 apiserver status = Running (err=<nil>)
	I1120 20:31:34.643830   77804 status.go:176] ha-025200-m03 status: &{Name:ha-025200-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 20:31:34.643849   77804 status.go:174] checking status of ha-025200-m04 ...
	I1120 20:31:34.644094   77804 cli_runner.go:164] Run: docker container inspect ha-025200-m04 --format={{.State.Status}}
	I1120 20:31:34.663604   77804 status.go:371] ha-025200-m04 host status = "Running" (err=<nil>)
	I1120 20:31:34.663626   77804 host.go:66] Checking if "ha-025200-m04" exists ...
	I1120 20:31:34.663880   77804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-025200-m04
	I1120 20:31:34.684169   77804 host.go:66] Checking if "ha-025200-m04" exists ...
	I1120 20:31:34.684525   77804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:31:34.684574   77804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-025200-m04
	I1120 20:31:34.703337   77804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/ha-025200-m04/id_rsa Username:docker}
	I1120 20:31:34.796930   77804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:31:34.810007   77804 status.go:176] ha-025200-m04 status: &{Name:ha-025200-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.72s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.89s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-025200 node start m02 --alsologtostderr -v 5: (7.921252924s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.89s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (94.9s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-025200 stop --alsologtostderr -v 5: (37.230328862s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 start --wait true --alsologtostderr -v 5
E1120 20:32:48.808293    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:32:58.312517    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:32:58.318965    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:32:58.330345    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:32:58.351795    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:32:58.393230    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:32:58.474723    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:32:58.636278    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:32:58.958097    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:32:59.600157    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:33:00.881572    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:33:03.444699    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:33:08.567013    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:33:16.508893    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:33:18.808755    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-025200 start --wait true --alsologtostderr -v 5: (57.542509786s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (94.90s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.31s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-025200 node delete m03 --alsologtostderr -v 5: (8.523504541s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.31s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

TestMultiControlPlane/serial/StopCluster (36.04s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 stop --alsologtostderr -v 5
E1120 20:33:39.290208    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-025200 stop --alsologtostderr -v 5: (35.925984166s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-025200 status --alsologtostderr -v 5: exit status 7 (113.376538ms)

-- stdout --
	ha-025200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-025200-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-025200-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1120 20:34:06.243754   94097 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:34:06.243865   94097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:34:06.243873   94097 out.go:374] Setting ErrFile to fd 2...
	I1120 20:34:06.243877   94097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:34:06.244076   94097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
	I1120 20:34:06.244242   94097 out.go:368] Setting JSON to false
	I1120 20:34:06.244273   94097 mustload.go:66] Loading cluster: ha-025200
	I1120 20:34:06.244337   94097 notify.go:221] Checking for updates...
	I1120 20:34:06.244958   94097 config.go:182] Loaded profile config "ha-025200": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:34:06.245134   94097 status.go:174] checking status of ha-025200 ...
	I1120 20:34:06.246244   94097 cli_runner.go:164] Run: docker container inspect ha-025200 --format={{.State.Status}}
	I1120 20:34:06.265287   94097 status.go:371] ha-025200 host status = "Stopped" (err=<nil>)
	I1120 20:34:06.265312   94097 status.go:384] host is not running, skipping remaining checks
	I1120 20:34:06.265319   94097 status.go:176] ha-025200 status: &{Name:ha-025200 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 20:34:06.265357   94097 status.go:174] checking status of ha-025200-m02 ...
	I1120 20:34:06.265608   94097 cli_runner.go:164] Run: docker container inspect ha-025200-m02 --format={{.State.Status}}
	I1120 20:34:06.282571   94097 status.go:371] ha-025200-m02 host status = "Stopped" (err=<nil>)
	I1120 20:34:06.282593   94097 status.go:384] host is not running, skipping remaining checks
	I1120 20:34:06.282600   94097 status.go:176] ha-025200-m02 status: &{Name:ha-025200-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 20:34:06.282621   94097 status.go:174] checking status of ha-025200-m04 ...
	I1120 20:34:06.282862   94097 cli_runner.go:164] Run: docker container inspect ha-025200-m04 --format={{.State.Status}}
	I1120 20:34:06.300565   94097 status.go:371] ha-025200-m04 host status = "Stopped" (err=<nil>)
	I1120 20:34:06.300586   94097 status.go:384] host is not running, skipping remaining checks
	I1120 20:34:06.300593   94097 status.go:176] ha-025200-m04 status: &{Name:ha-025200-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.04s)

TestMultiControlPlane/serial/RestartCluster (56.2s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1120 20:34:20.252426    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-025200 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (55.387885335s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (56.20s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

TestMultiControlPlane/serial/AddSecondaryNode (71.36s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 node add --control-plane --alsologtostderr -v 5
E1120 20:35:42.175356    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-025200 node add --control-plane --alsologtostderr -v 5: (1m10.486417503s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-025200 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (71.36s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

TestJSONOutput/start/Command (37.96s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-597686 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-597686 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (37.962474844s)
--- PASS: TestJSONOutput/start/Command (37.96s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-597686 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-597686 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.86s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-597686 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-597686 --output=json --user=testUser: (5.86220667s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-892396 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-892396 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (81.988218ms)

-- stdout --
	{"specversion":"1.0","id":"f800da9a-5b67-4d95-b478-c5728d9e82e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-892396] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4bde2c0a-29a1-4517-8b76-4b56ae876d2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21923"}}
	{"specversion":"1.0","id":"88a8f0de-993e-4dbc-a855-689675e1df68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b95696cd-4f11-47fe-b285-ccbc9e7c57f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21923-3769/kubeconfig"}}
	{"specversion":"1.0","id":"42dc80d9-d41c-4bfd-8ca0-d153eec1abaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3769/.minikube"}}
	{"specversion":"1.0","id":"fd4ef77a-3b19-4466-8454-9e80f8a5d01d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"640eb166-02aa-4145-84f5-18e38f53d8a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c6ce7916-f77b-4949-8c35-9d0db7a67578","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-892396" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-892396
--- PASS: TestErrorJSONOutput (0.24s)

TestKicCustomNetwork/create_custom_network (28.24s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-503759 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-503759 --network=: (26.077161014s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-503759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-503759
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-503759: (2.13988927s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.24s)

TestKicCustomNetwork/use_default_bridge_network (23.19s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-130305 --network=bridge
E1120 20:37:48.808257    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:37:58.316811    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-130305 --network=bridge: (21.164292964s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-130305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-130305
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-130305: (2.008971631s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.19s)

TestKicExistingNetwork (23.71s)

=== RUN   TestKicExistingNetwork
I1120 20:38:04.924812    7731 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1120 20:38:04.941925    7731 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1120 20:38:04.941999    7731 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1120 20:38:04.942023    7731 cli_runner.go:164] Run: docker network inspect existing-network
W1120 20:38:04.959839    7731 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1120 20:38:04.959866    7731 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1120 20:38:04.959877    7731 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1120 20:38:04.959995    7731 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1120 20:38:04.978327    7731 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5a901ca622c0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:53:dd:e9:bf:88} reservation:<nil>}
I1120 20:38:04.978684    7731 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ee54a0}
I1120 20:38:04.978721    7731 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1120 20:38:04.978773    7731 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1120 20:38:05.024121    7731 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-142382 --network=existing-network
E1120 20:38:26.017161    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-142382 --network=existing-network: (21.607173993s)
helpers_test.go:175: Cleaning up "existing-network-142382" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-142382
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-142382: (1.96978404s)
I1120 20:38:28.618759    7731 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.71s)

TestKicCustomSubnet (24.35s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-151924 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-151924 --subnet=192.168.60.0/24: (22.187413851s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-151924 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-151924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-151924
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-151924: (2.141341447s)
--- PASS: TestKicCustomSubnet (24.35s)

TestKicStaticIP (28.18s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-607269 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-607269 --static-ip=192.168.200.200: (25.922343334s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-607269 ip
helpers_test.go:175: Cleaning up "static-ip-607269" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-607269
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-607269: (2.109525239s)
--- PASS: TestKicStaticIP (28.18s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (47.37s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-834457 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-834457 --driver=docker  --container-runtime=containerd: (19.777766015s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-837232 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-837232 --driver=docker  --container-runtime=containerd: (22.078150348s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-834457
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-837232
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-837232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-837232
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-837232: (1.936543609s)
helpers_test.go:175: Cleaning up "first-834457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-834457
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-834457: (2.348818717s)
--- PASS: TestMinikubeProfile (47.37s)

TestMountStart/serial/StartWithMountFirst (7.64s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-763870 --memory=3072 --mount-string /tmp/TestMountStartserial2906935666/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-763870 --memory=3072 --mount-string /tmp/TestMountStartserial2906935666/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.644605165s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.64s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-763870 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (4.32s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-773831 --memory=3072 --mount-string /tmp/TestMountStartserial2906935666/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-773831 --memory=3072 --mount-string /tmp/TestMountStartserial2906935666/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.320704204s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.32s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-773831 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-763870 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-763870 --alsologtostderr -v=5: (1.664200782s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-773831 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-773831
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-773831: (1.253958458s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (6.98s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-773831
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-773831: (5.975667767s)
--- PASS: TestMountStart/serial/RestartStopped (6.98s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-773831 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (62.19s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-898076 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-898076 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m1.716783782s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (62.19s)

TestMultiNode/serial/DeployApp2Nodes (4.14s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-898076 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-898076 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-898076 -- rollout status deployment/busybox: (2.640339548s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-898076 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-898076 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-898076 -- exec busybox-7b57f96db7-ktzxv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-898076 -- exec busybox-7b57f96db7-wq479 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-898076 -- exec busybox-7b57f96db7-ktzxv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-898076 -- exec busybox-7b57f96db7-wq479 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-898076 -- exec busybox-7b57f96db7-ktzxv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-898076 -- exec busybox-7b57f96db7-wq479 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.14s)

TestMultiNode/serial/PingHostFrom2Pods (0.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-898076 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-898076 -- exec busybox-7b57f96db7-ktzxv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-898076 -- exec busybox-7b57f96db7-ktzxv -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-898076 -- exec busybox-7b57f96db7-wq479 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-898076 -- exec busybox-7b57f96db7-wq479 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

TestMultiNode/serial/AddNode (25.24s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-898076 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-898076 -v=5 --alsologtostderr: (24.605955516s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (25.24s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-898076 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.67s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

TestMultiNode/serial/CopyFile (9.69s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 cp testdata/cp-test.txt multinode-898076:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 ssh -n multinode-898076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 cp multinode-898076:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1844961847/001/cp-test_multinode-898076.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 ssh -n multinode-898076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 cp multinode-898076:/home/docker/cp-test.txt multinode-898076-m02:/home/docker/cp-test_multinode-898076_multinode-898076-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 ssh -n multinode-898076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 ssh -n multinode-898076-m02 "sudo cat /home/docker/cp-test_multinode-898076_multinode-898076-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 cp multinode-898076:/home/docker/cp-test.txt multinode-898076-m03:/home/docker/cp-test_multinode-898076_multinode-898076-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 ssh -n multinode-898076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 ssh -n multinode-898076-m03 "sudo cat /home/docker/cp-test_multinode-898076_multinode-898076-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 cp testdata/cp-test.txt multinode-898076-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 ssh -n multinode-898076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 cp multinode-898076-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1844961847/001/cp-test_multinode-898076-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 ssh -n multinode-898076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 cp multinode-898076-m02:/home/docker/cp-test.txt multinode-898076:/home/docker/cp-test_multinode-898076-m02_multinode-898076.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 ssh -n multinode-898076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 ssh -n multinode-898076 "sudo cat /home/docker/cp-test_multinode-898076-m02_multinode-898076.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 cp multinode-898076-m02:/home/docker/cp-test.txt multinode-898076-m03:/home/docker/cp-test_multinode-898076-m02_multinode-898076-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 ssh -n multinode-898076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 ssh -n multinode-898076-m03 "sudo cat /home/docker/cp-test_multinode-898076-m02_multinode-898076-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 cp testdata/cp-test.txt multinode-898076-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 ssh -n multinode-898076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 cp multinode-898076-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1844961847/001/cp-test_multinode-898076-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 ssh -n multinode-898076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 cp multinode-898076-m03:/home/docker/cp-test.txt multinode-898076:/home/docker/cp-test_multinode-898076-m03_multinode-898076.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 ssh -n multinode-898076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 ssh -n multinode-898076 "sudo cat /home/docker/cp-test_multinode-898076-m03_multinode-898076.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 cp multinode-898076-m03:/home/docker/cp-test.txt multinode-898076-m02:/home/docker/cp-test_multinode-898076-m03_multinode-898076-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 ssh -n multinode-898076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 ssh -n multinode-898076-m02 "sudo cat /home/docker/cp-test_multinode-898076-m03_multinode-898076-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.69s)

                                                
                                    
TestMultiNode/serial/StopNode (2.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-898076 node stop m03: (1.257054868s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-898076 status: exit status 7 (483.27607ms)

                                                
                                                
-- stdout --
	multinode-898076
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-898076-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-898076-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-898076 status --alsologtostderr: exit status 7 (483.775401ms)

                                                
                                                
-- stdout --
	multinode-898076
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-898076-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-898076-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 20:42:17.910081  156507 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:42:17.910187  156507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:42:17.910195  156507 out.go:374] Setting ErrFile to fd 2...
	I1120 20:42:17.910198  156507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:42:17.910430  156507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
	I1120 20:42:17.910570  156507 out.go:368] Setting JSON to false
	I1120 20:42:17.910597  156507 mustload.go:66] Loading cluster: multinode-898076
	I1120 20:42:17.910644  156507 notify.go:221] Checking for updates...
	I1120 20:42:17.910898  156507 config.go:182] Loaded profile config "multinode-898076": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:42:17.910909  156507 status.go:174] checking status of multinode-898076 ...
	I1120 20:42:17.911303  156507 cli_runner.go:164] Run: docker container inspect multinode-898076 --format={{.State.Status}}
	I1120 20:42:17.930024  156507 status.go:371] multinode-898076 host status = "Running" (err=<nil>)
	I1120 20:42:17.930058  156507 host.go:66] Checking if "multinode-898076" exists ...
	I1120 20:42:17.930426  156507 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-898076
	I1120 20:42:17.947899  156507 host.go:66] Checking if "multinode-898076" exists ...
	I1120 20:42:17.948137  156507 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:42:17.948175  156507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898076
	I1120 20:42:17.965357  156507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/multinode-898076/id_rsa Username:docker}
	I1120 20:42:18.057757  156507 ssh_runner.go:195] Run: systemctl --version
	I1120 20:42:18.063961  156507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:42:18.075752  156507 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:42:18.135314  156507 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-20 20:42:18.125931474 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:42:18.135975  156507 kubeconfig.go:125] found "multinode-898076" server: "https://192.168.67.2:8443"
	I1120 20:42:18.136008  156507 api_server.go:166] Checking apiserver status ...
	I1120 20:42:18.136051  156507 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:42:18.148191  156507 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1312/cgroup
	W1120 20:42:18.156465  156507 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1312/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1120 20:42:18.156536  156507 ssh_runner.go:195] Run: ls
	I1120 20:42:18.160081  156507 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1120 20:42:18.164019  156507 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1120 20:42:18.164058  156507 status.go:463] multinode-898076 apiserver status = Running (err=<nil>)
	I1120 20:42:18.164078  156507 status.go:176] multinode-898076 status: &{Name:multinode-898076 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 20:42:18.164099  156507 status.go:174] checking status of multinode-898076-m02 ...
	I1120 20:42:18.164319  156507 cli_runner.go:164] Run: docker container inspect multinode-898076-m02 --format={{.State.Status}}
	I1120 20:42:18.181855  156507 status.go:371] multinode-898076-m02 host status = "Running" (err=<nil>)
	I1120 20:42:18.181875  156507 host.go:66] Checking if "multinode-898076-m02" exists ...
	I1120 20:42:18.182135  156507 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-898076-m02
	I1120 20:42:18.198883  156507 host.go:66] Checking if "multinode-898076-m02" exists ...
	I1120 20:42:18.199152  156507 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:42:18.199186  156507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898076-m02
	I1120 20:42:18.215809  156507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/21923-3769/.minikube/machines/multinode-898076-m02/id_rsa Username:docker}
	I1120 20:42:18.307317  156507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:42:18.319345  156507 status.go:176] multinode-898076-m02 status: &{Name:multinode-898076-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1120 20:42:18.319401  156507 status.go:174] checking status of multinode-898076-m03 ...
	I1120 20:42:18.319648  156507 cli_runner.go:164] Run: docker container inspect multinode-898076-m03 --format={{.State.Status}}
	I1120 20:42:18.337010  156507 status.go:371] multinode-898076-m03 host status = "Stopped" (err=<nil>)
	I1120 20:42:18.337029  156507 status.go:384] host is not running, skipping remaining checks
	I1120 20:42:18.337037  156507 status.go:176] multinode-898076-m03 status: &{Name:multinode-898076-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.22s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (6.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-898076 node start m03 -v=5 --alsologtostderr: (6.131184535s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.82s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (70.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-898076
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-898076
E1120 20:42:48.808444    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-898076: (24.975464786s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-898076 --wait=true -v=5 --alsologtostderr
E1120 20:42:58.312540    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-898076 --wait=true -v=5 --alsologtostderr: (45.763594879s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-898076
--- PASS: TestMultiNode/serial/RestartKeepsNodes (70.86s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-898076 node delete m03: (4.599333788s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.19s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-898076 stop: (23.767330157s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-898076 status: exit status 7 (96.755282ms)

                                                
                                                
-- stdout --
	multinode-898076
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-898076-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-898076 status --alsologtostderr: exit status 7 (95.159074ms)

                                                
                                                
-- stdout --
	multinode-898076
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-898076-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 20:44:05.126821  166265 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:44:05.126938  166265 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:44:05.126949  166265 out.go:374] Setting ErrFile to fd 2...
	I1120 20:44:05.126956  166265 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:44:05.127135  166265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
	I1120 20:44:05.127315  166265 out.go:368] Setting JSON to false
	I1120 20:44:05.127363  166265 mustload.go:66] Loading cluster: multinode-898076
	I1120 20:44:05.127468  166265 notify.go:221] Checking for updates...
	I1120 20:44:05.127754  166265 config.go:182] Loaded profile config "multinode-898076": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:44:05.127770  166265 status.go:174] checking status of multinode-898076 ...
	I1120 20:44:05.129291  166265 cli_runner.go:164] Run: docker container inspect multinode-898076 --format={{.State.Status}}
	I1120 20:44:05.148024  166265 status.go:371] multinode-898076 host status = "Stopped" (err=<nil>)
	I1120 20:44:05.148071  166265 status.go:384] host is not running, skipping remaining checks
	I1120 20:44:05.148076  166265 status.go:176] multinode-898076 status: &{Name:multinode-898076 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 20:44:05.148111  166265 status.go:174] checking status of multinode-898076-m02 ...
	I1120 20:44:05.148360  166265 cli_runner.go:164] Run: docker container inspect multinode-898076-m02 --format={{.State.Status}}
	I1120 20:44:05.165787  166265 status.go:371] multinode-898076-m02 host status = "Stopped" (err=<nil>)
	I1120 20:44:05.165807  166265 status.go:384] host is not running, skipping remaining checks
	I1120 20:44:05.165814  166265 status.go:176] multinode-898076-m02 status: &{Name:multinode-898076-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.96s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (53.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-898076 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1120 20:44:11.870871    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-898076 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (53.04562085s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-898076 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.62s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (22.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-898076
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-898076-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-898076-m02 --driver=docker  --container-runtime=containerd: exit status 14 (73.586681ms)

                                                
                                                
-- stdout --
	* [multinode-898076-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-3769/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3769/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-898076-m02' is duplicated with machine name 'multinode-898076-m02' in profile 'multinode-898076'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-898076-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-898076-m03 --driver=docker  --container-runtime=containerd: (19.823338157s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-898076
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-898076: exit status 80 (290.339377ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-898076 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-898076-m03 already exists in multinode-898076-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-898076-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-898076-m03: (1.928984768s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.17s)

                                                
                                    
TestPreload (110.28s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-965738 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-965738 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (46.075806849s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-965738 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-965738 image pull gcr.io/k8s-minikube/busybox: (1.618069288s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-965738
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-965738: (6.658051968s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-965738 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-965738 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (53.295522979s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-965738 image list
helpers_test.go:175: Cleaning up "test-preload-965738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-965738
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-965738: (2.403075646s)
--- PASS: TestPreload (110.28s)

                                                
                                    
TestScheduledStopUnix (96.12s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-147224 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-147224 --memory=3072 --driver=docker  --container-runtime=containerd: (19.847900477s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-147224 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1120 20:47:35.326490  184509 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:47:35.326793  184509 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:47:35.326804  184509 out.go:374] Setting ErrFile to fd 2...
	I1120 20:47:35.326808  184509 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:47:35.326991  184509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
	I1120 20:47:35.327234  184509 out.go:368] Setting JSON to false
	I1120 20:47:35.327325  184509 mustload.go:66] Loading cluster: scheduled-stop-147224
	I1120 20:47:35.327689  184509 config.go:182] Loaded profile config "scheduled-stop-147224": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:47:35.327758  184509 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/scheduled-stop-147224/config.json ...
	I1120 20:47:35.327936  184509 mustload.go:66] Loading cluster: scheduled-stop-147224
	I1120 20:47:35.328027  184509 config.go:182] Loaded profile config "scheduled-stop-147224": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-147224 -n scheduled-stop-147224
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-147224 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1120 20:47:35.715077  184663 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:47:35.715241  184663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:47:35.715251  184663 out.go:374] Setting ErrFile to fd 2...
	I1120 20:47:35.715258  184663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:47:35.715495  184663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
	I1120 20:47:35.715733  184663 out.go:368] Setting JSON to false
	I1120 20:47:35.715923  184663 daemonize_unix.go:73] killing process 184546 as it is an old scheduled stop
	I1120 20:47:35.716029  184663 mustload.go:66] Loading cluster: scheduled-stop-147224
	I1120 20:47:35.716425  184663 config.go:182] Loaded profile config "scheduled-stop-147224": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:47:35.716520  184663 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/scheduled-stop-147224/config.json ...
	I1120 20:47:35.716736  184663 mustload.go:66] Loading cluster: scheduled-stop-147224
	I1120 20:47:35.716879  184663 config.go:182] Loaded profile config "scheduled-stop-147224": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1120 20:47:35.721766    7731 retry.go:31] will retry after 53.379µs: open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/scheduled-stop-147224/pid: no such file or directory
I1120 20:47:35.722945    7731 retry.go:31] will retry after 121.944µs: open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/scheduled-stop-147224/pid: no such file or directory
I1120 20:47:35.724094    7731 retry.go:31] will retry after 122.66µs: open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/scheduled-stop-147224/pid: no such file or directory
I1120 20:47:35.725226    7731 retry.go:31] will retry after 410.389µs: open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/scheduled-stop-147224/pid: no such file or directory
I1120 20:47:35.726355    7731 retry.go:31] will retry after 370.417µs: open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/scheduled-stop-147224/pid: no such file or directory
I1120 20:47:35.727557    7731 retry.go:31] will retry after 923.42µs: open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/scheduled-stop-147224/pid: no such file or directory
I1120 20:47:35.728673    7731 retry.go:31] will retry after 1.532084ms: open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/scheduled-stop-147224/pid: no such file or directory
I1120 20:47:35.730859    7731 retry.go:31] will retry after 1.164072ms: open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/scheduled-stop-147224/pid: no such file or directory
I1120 20:47:35.733070    7731 retry.go:31] will retry after 1.351059ms: open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/scheduled-stop-147224/pid: no such file or directory
I1120 20:47:35.735274    7731 retry.go:31] will retry after 4.20768ms: open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/scheduled-stop-147224/pid: no such file or directory
I1120 20:47:35.740490    7731 retry.go:31] will retry after 5.593545ms: open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/scheduled-stop-147224/pid: no such file or directory
I1120 20:47:35.746755    7731 retry.go:31] will retry after 10.316015ms: open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/scheduled-stop-147224/pid: no such file or directory
I1120 20:47:35.758012    7731 retry.go:31] will retry after 18.252418ms: open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/scheduled-stop-147224/pid: no such file or directory
I1120 20:47:35.777257    7731 retry.go:31] will retry after 16.37377ms: open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/scheduled-stop-147224/pid: no such file or directory
I1120 20:47:35.794518    7731 retry.go:31] will retry after 36.816617ms: open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/scheduled-stop-147224/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-147224 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1120 20:47:48.808584    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/addons-775382/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:47:58.313891    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-147224 -n scheduled-stop-147224
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-147224
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-147224 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1120 20:48:01.582724  185561 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:48:01.583018  185561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:48:01.583030  185561 out.go:374] Setting ErrFile to fd 2...
	I1120 20:48:01.583037  185561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:48:01.583244  185561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
	I1120 20:48:01.583521  185561 out.go:368] Setting JSON to false
	I1120 20:48:01.583625  185561 mustload.go:66] Loading cluster: scheduled-stop-147224
	I1120 20:48:01.583940  185561 config.go:182] Loaded profile config "scheduled-stop-147224": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:48:01.584024  185561 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/scheduled-stop-147224/config.json ...
	I1120 20:48:01.584234  185561 mustload.go:66] Loading cluster: scheduled-stop-147224
	I1120 20:48:01.584357  185561 config.go:182] Loaded profile config "scheduled-stop-147224": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-147224
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-147224: exit status 7 (77.955174ms)

                                                
                                                
-- stdout --
	scheduled-stop-147224
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-147224 -n scheduled-stop-147224
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-147224 -n scheduled-stop-147224: exit status 7 (78.510693ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-147224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-147224
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-147224: (4.789271894s)
--- PASS: TestScheduledStopUnix (96.12s)

                                                
                                    
TestInsufficientStorage (9.6s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-253832 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-253832 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.16688572s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6b1a5556-6d7c-4cf7-87b5-c5206629719c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-253832] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cab76f99-7e61-4665-adea-553b0c795791","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21923"}}
	{"specversion":"1.0","id":"759734d8-2fd9-49f8-81d0-3701d6308e4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"40fc0ceb-2d67-4731-8755-e1ea1682c288","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21923-3769/kubeconfig"}}
	{"specversion":"1.0","id":"0862dc7a-fa18-4544-8d69-3627a1f926aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3769/.minikube"}}
	{"specversion":"1.0","id":"f314583e-796e-4677-b4f0-7879b61cb297","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"44d3a934-b736-4440-bbd9-d223b45029e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e842c256-4c6d-49fd-87da-50c6b4130c74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"4acb1cae-059c-4fa6-93f2-d4c341e26e9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"5123bf0e-a735-4cf5-8caa-dfb4ef377884","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e32a0181-d43a-4cba-8111-14d764450da1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"3564a6ed-303e-4f6e-8e96-99dac5f9f176","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-253832\" primary control-plane node in \"insufficient-storage-253832\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fa5a7213-9d6f-437d-880b-3aa950dee42b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763507788-21924 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d06d989c-a265-4592-9e1f-12db848197f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c03ce274-bdd4-49cc-9edd-b7e2605d302b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-253832 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-253832 --output=json --layout=cluster: exit status 7 (285.753066ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-253832","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-253832","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1120 20:48:58.977751  187828 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-253832" does not appear in /home/jenkins/minikube-integration/21923-3769/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-253832 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-253832 --output=json --layout=cluster: exit status 7 (283.90445ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-253832","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-253832","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1120 20:48:59.262659  187942 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-253832" does not appear in /home/jenkins/minikube-integration/21923-3769/kubeconfig
	E1120 20:48:59.272783  187942 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/insufficient-storage-253832/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-253832" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-253832
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-253832: (1.866257426s)
--- PASS: TestInsufficientStorage (9.60s)

                                                
                                    
TestRunningBinaryUpgrade (100.56s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1662348241 start -p running-upgrade-191122 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1120 20:49:21.378962    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1662348241 start -p running-upgrade-191122 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (1m15.824450872s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-191122 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-191122 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (22.169791658s)
helpers_test.go:175: Cleaning up "running-upgrade-191122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-191122
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-191122: (2.008344911s)
--- PASS: TestRunningBinaryUpgrade (100.56s)

                                                
                                    
TestKubernetesUpgrade (323.56s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-902531 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-902531 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (24.986567776s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-902531
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-902531: (1.93624377s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-902531 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-902531 status --format={{.Host}}: exit status 7 (84.239568ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-902531 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-902531 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m35.620538359s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-902531 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-902531 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-902531 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (83.869377ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-902531] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-3769/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3769/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-902531
	    minikube start -p kubernetes-upgrade-902531 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9025312 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-902531 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-902531 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-902531 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (17.802391713s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-902531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-902531
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-902531: (2.989120428s)
--- PASS: TestKubernetesUpgrade (323.56s)

                                                
                                    
TestMissingContainerUpgrade (81.34s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1117593898 start -p missing-upgrade-670521 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1117593898 start -p missing-upgrade-670521 --memory=3072 --driver=docker  --container-runtime=containerd: (29.560569534s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-670521
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-670521
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-670521 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-670521 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (48.347105614s)
helpers_test.go:175: Cleaning up "missing-upgrade-670521" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-670521
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-670521: (2.190772861s)
--- PASS: TestMissingContainerUpgrade (81.34s)
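The scenario this test covers is a node container that disappears behind minikube's back; a minimal sketch of the same steps, using the profile name generated by this run (the old-binary path is the temporary download the harness created):

	# Create a cluster with an older minikube release
	/tmp/minikube-v1.32.0.1117593898 start -p missing-upgrade-670521 --memory=3072 --driver=docker --container-runtime=containerd
	# Remove the node container directly through Docker
	docker stop missing-upgrade-670521
	docker rm missing-upgrade-670521
	# A newer minikube must notice the missing container and recreate it on start
	out/minikube-linux-amd64 start -p missing-upgrade-670521 --memory=3072 --driver=docker --container-runtime=containerd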

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.41s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.41s)

                                                
                                    
x
+
TestPause/serial/Start (47.14s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-051959 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-051959 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (47.135672273s)
--- PASS: TestPause/serial/Start (47.14s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (116.32s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.533823482 start -p stopped-upgrade-058944 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.533823482 start -p stopped-upgrade-058944 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (1m16.122177586s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.533823482 -p stopped-upgrade-058944 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.533823482 -p stopped-upgrade-058944 stop: (11.783509897s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-058944 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-058944 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (28.413352737s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (116.32s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (7.53s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-051959 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-051959 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.513881885s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.53s)

                                                
                                    
x
+
TestPause/serial/Pause (2.19s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-051959 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-051959 --alsologtostderr -v=5: (2.192821276s)
--- PASS: TestPause/serial/Pause (2.19s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.38s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-051959 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-051959 --output=json --layout=cluster: exit status 2 (378.839372ms)

                                                
                                                
-- stdout --
	{"Name":"pause-051959","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-051959","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.38s)
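While paused, the cluster reports StatusCode 418 ("Paused") and the status command itself exits non-zero, so scripts that poll it must tolerate that exit code. A minimal sketch, assuming jq is available on the host (it is not part of minikube):

	# exit status 2 is expected for a paused cluster, so don't let it abort the script
	out/minikube-linux-amd64 status -p pause-051959 --output=json --layout=cluster > status.json || true
	jq -r '.StatusName' status.json                               # "Paused"
	jq -r '.Nodes[0].Components.kubelet.StatusName' status.json   # "Stopped"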

                                                
                                    
x
+
TestPause/serial/Unpause (0.87s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-051959 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.87s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.8s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-051959 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.8s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-051959 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-051959 --alsologtostderr -v=5: (2.796175628s)
--- PASS: TestPause/serial/DeletePaused (2.80s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.59s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-051959
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-051959: exit status 1 (18.243235ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-051959: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.59s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-666907 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-666907 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (92.351958ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-666907] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-3769/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3769/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
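As the MK_USAGE error states, --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of the accepted forms, using the flags from this run:

	# Container-only node with no Kubernetes components
	out/minikube-linux-amd64 start -p NoKubernetes-666907 --no-kubernetes --memory=3072 --driver=docker --container-runtime=containerd
	# Or pin a Kubernetes version, but without --no-kubernetes
	out/minikube-linux-amd64 start -p NoKubernetes-666907 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
	# If kubernetes-version is set in the global config, unset it before using --no-kubernetes
	out/minikube-linux-amd64 config unset kubernetes-version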

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (23.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-666907 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-666907 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (23.133805763s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-666907 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (23.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (6.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-876657 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-876657 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (177.714757ms)

                                                
                                                
-- stdout --
	* [false-876657] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-3769/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3769/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 20:50:45.334414  215204 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:50:45.334679  215204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:50:45.334689  215204 out.go:374] Setting ErrFile to fd 2...
	I1120 20:50:45.334693  215204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:50:45.334858  215204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3769/.minikube/bin
	I1120 20:50:45.335307  215204 out.go:368] Setting JSON to false
	I1120 20:50:45.336462  215204 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1997,"bootTime":1763669848,"procs":284,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:50:45.336555  215204 start.go:143] virtualization: kvm guest
	I1120 20:50:45.339481  215204 out.go:179] * [false-876657] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:50:45.340773  215204 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:50:45.340764  215204 notify.go:221] Checking for updates...
	I1120 20:50:45.346551  215204 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:50:45.347824  215204 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3769/kubeconfig
	I1120 20:50:45.349053  215204 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3769/.minikube
	I1120 20:50:45.350312  215204 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:50:45.351489  215204 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:50:45.353267  215204 config.go:182] Loaded profile config "NoKubernetes-666907": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:50:45.353435  215204 config.go:182] Loaded profile config "missing-upgrade-670521": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1120 20:50:45.353580  215204 config.go:182] Loaded profile config "stopped-upgrade-058944": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1120 20:50:45.353714  215204 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:50:45.377469  215204 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:50:45.377574  215204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:50:45.443763  215204 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:84 SystemTime:2025-11-20 20:50:45.432897142 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:50:45.443899  215204 docker.go:319] overlay module found
	I1120 20:50:45.447259  215204 out.go:179] * Using the docker driver based on user configuration
	I1120 20:50:45.448516  215204 start.go:309] selected driver: docker
	I1120 20:50:45.448532  215204 start.go:930] validating driver "docker" against <nil>
	I1120 20:50:45.448544  215204 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:50:45.450149  215204 out.go:203] 
	W1120 20:50:45.451454  215204 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1120 20:50:45.452557  215204 out.go:203] 

                                                
                                                
** /stderr **
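This exit is the expected validation failure: the containerd runtime requires a CNI, so --cni=false is rejected before any node is created. A minimal sketch of a start line that passes this check (bridge is one of the values minikube's --cni flag accepts; it is shown here only for illustration and is not part of the test):

	out/minikube-linux-amd64 start -p false-876657 --memory=3072 --cni=bridge --driver=docker --container-runtime=containerd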
net_test.go:88: 
----------------------- debugLogs start: false-876657 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-876657

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-876657

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-876657

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-876657

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-876657

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-876657

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-876657

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-876657

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-876657

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-876657

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-876657

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-876657" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-876657" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21923-3769/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 20:50:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: missing-upgrade-670521
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21923-3769/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 20:50:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-058944
contexts:
- context:
    cluster: missing-upgrade-670521
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 20:50:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-670521
  name: missing-upgrade-670521
- context:
    cluster: stopped-upgrade-058944
    user: stopped-upgrade-058944
  name: stopped-upgrade-058944
current-context: ""
kind: Config
users:
- name: missing-upgrade-670521
  user:
    client-certificate: /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/missing-upgrade-670521/client.crt
    client-key: /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/missing-upgrade-670521/client.key
- name: stopped-upgrade-058944
  user:
    client-certificate: /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/stopped-upgrade-058944/client.crt
    client-key: /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/stopped-upgrade-058944/client.key
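The false-876657 context was never created (the start command exited during validation), which is why every kubectl probe above reports that the context was not found; only the two upgrade profiles appear in this kubeconfig and current-context is empty. A minimal sketch of how one of the contexts that does exist here would be addressed:

	kubectl config use-context stopped-upgrade-058944
	kubectl --context stopped-upgrade-058944 get nodes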

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-876657

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-876657"

                                                
                                                
----------------------- debugLogs end: false-876657 [took: 5.659005705s] --------------------------------
helpers_test.go:175: Cleaning up "false-876657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-876657
--- PASS: TestNetworkPlugins/group/false (6.06s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-666907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-666907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (22.44783546s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-666907 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-666907 status -o json: exit status 2 (419.365253ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-666907","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-666907
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-666907: (2.131084931s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.00s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.33s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-058944
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-058944: (1.332930946s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-666907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-666907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (8.998688421s)
--- PASS: TestNoKubernetes/serial/Start (9.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21923-3769/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-666907 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-666907 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.250378ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
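The non-zero exit is the signal the test wants: with --no-kubernetes the kubelet unit is never started, and systemctl is-active conventionally exits with status 3 for an inactive unit. A minimal sketch of the same check without --quiet, so the state is printed as well:

	out/minikube-linux-amd64 ssh -p NoKubernetes-666907 "sudo systemctl is-active kubelet"
	# prints "inactive" (or "unknown" if the unit does not exist) and exits non-zero when the kubelet was never started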

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.73s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-666907
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-666907: (1.333488439s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-666907 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-666907 --driver=docker  --container-runtime=containerd: (6.8850399s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.89s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-666907 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-666907 "sudo systemctl is-active --quiet service kubelet": exit status 1 (318.387678ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (49.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-715005 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-715005 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (49.199053128s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (49.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (47.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-480337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-480337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (47.948173914s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (47.95s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-715005 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-715005 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-715005 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-715005 --alsologtostderr -v=3: (12.021097486s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-715005 -n old-k8s-version-715005
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-715005 -n old-k8s-version-715005: exit status 7 (80.487401ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-715005 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (50.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-715005 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-715005 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (49.937145905s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-715005 -n old-k8s-version-715005
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.8s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-480337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-480337 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.80s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-480337 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-480337 --alsologtostderr -v=3: (12.086343689s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480337 -n no-preload-480337
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480337 -n no-preload-480337: exit status 7 (79.313816ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-480337 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (44.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-480337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-480337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (43.795193549s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480337 -n no-preload-480337
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (44.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-678zn" [efd10f37-7859-4a30-8a4a-a7c1e849934e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003521921s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-678zn" [efd10f37-7859-4a30-8a4a-a7c1e849934e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003407548s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-715005 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bqwpr" [e7b53df2-4e8b-49b1-8b02-ad8bf0b68635] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00324712s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-715005 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
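
Note: VerifyKubernetesImages lists the images loaded on the node and reports anything that is not a standard minikube/Kubernetes image; the kindnetd and busybox entries above are expected test artifacts, not failures. The underlying command:

  out/minikube-linux-amd64 -p old-k8s-version-715005 image list --format=json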

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-715005 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-715005 -n old-k8s-version-715005
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-715005 -n old-k8s-version-715005: exit status 2 (317.35282ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-715005 -n old-k8s-version-715005
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-715005 -n old-k8s-version-715005: exit status 2 (323.846054ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-715005 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-715005 -n old-k8s-version-715005
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-715005 -n old-k8s-version-715005
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.75s)
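
Note: the Pause test expects "status" to return exit status 2 while the control plane is paused (APIServer reports Paused, Kubelet reports Stopped) and to succeed again after unpausing. A minimal sketch of the same sequence with this run's profile:

  out/minikube-linux-amd64 pause -p old-k8s-version-715005 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-715005 -n old-k8s-version-715005   # prints Paused, exit status 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-715005 -n old-k8s-version-715005     # prints Stopped, exit status 2
  out/minikube-linux-amd64 unpause -p old-k8s-version-715005 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-715005 -n old-k8s-version-715005   # exits 0 again once unpaused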

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bqwpr" [e7b53df2-4e8b-49b1-8b02-ad8bf0b68635] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.031388402s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-480337 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (42.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-954820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-954820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (42.941568456s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (42.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-480337 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-480337 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-480337 -n no-preload-480337
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-480337 -n no-preload-480337: exit status 2 (330.414622ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-480337 -n no-preload-480337
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-480337 -n no-preload-480337: exit status 2 (347.338692ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-480337 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-480337 -n no-preload-480337
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-480337 -n no-preload-480337
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-053182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-053182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (40.611568788s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.61s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (28.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-439796 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-439796 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (28.934434179s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.93s)
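
Note: the newest-cni profile is started with an explicit CNI network plugin and a kubeadm pod-network CIDR passed through --extra-config, and it only waits for the apiserver, system pods and default service account; user workloads are not expected to schedule until a CNI is configured, hence the warnings in the later steps. The start invocation, as logged above:

  out/minikube-linux-amd64 start -p newest-cni-439796 --memory=3072 --alsologtostderr \
    --wait=apiserver,system_pods,default_sa --network-plugin=cni \
    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
    --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.1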

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-439796 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.74s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-439796 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-439796 --alsologtostderr -v=3: (1.405161018s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.41s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-439796 -n newest-cni-439796
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-439796 -n newest-cni-439796: exit status 7 (79.220652ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-439796 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (10.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-439796 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-439796 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (10.084028172s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-439796 -n newest-cni-439796
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-954820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-954820 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)
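
Note: EnableAddonWhileActive enables the metrics-server addon with overridden image and registry, then inspects the resulting deployment via kubectl. A minimal sketch using the embed-certs profile from this run:

  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-954820 \
    --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
  kubectl --context embed-certs-954820 describe deploy/metrics-server -n kube-system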

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-954820 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-954820 --alsologtostderr -v=3: (12.274373563s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-439796 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.71s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-439796 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-439796 -n newest-cni-439796
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-439796 -n newest-cni-439796: exit status 2 (322.728734ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-439796 -n newest-cni-439796
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-439796 -n newest-cni-439796: exit status 2 (318.563009ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-439796 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-439796 -n newest-cni-439796
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-439796 -n newest-cni-439796
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.71s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (43.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-876657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-876657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (43.204631601s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-053182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-053182 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-053182 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-053182 --alsologtostderr -v=3: (12.260711936s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-954820 -n embed-certs-954820
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-954820 -n embed-certs-954820: exit status 7 (87.814997ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-954820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (51.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-954820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-954820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (50.965892101s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-954820 -n embed-certs-954820
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-053182 -n default-k8s-diff-port-053182
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-053182 -n default-k8s-diff-port-053182: exit status 7 (101.914567ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-053182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-053182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-053182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (47.131178504s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-053182 -n default-k8s-diff-port-053182
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.53s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-876657 "pgrep -a kubelet"
I1120 20:55:59.058652    7731 config.go:182] Loaded profile config "auto-876657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (8.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-876657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8r5lh" [ae41f887-88bb-463d-a868-d630e9e8a10c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8r5lh" [ae41f887-88bb-463d-a868-d630e9e8a10c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.00415939s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.27s)
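
Note: NetCatPod deploys the netcat test workload into the cluster under test and waits for its pod to become Ready; "replace --force" is used so reruns against the same profile replace any leftover deployment. Roughly:

  kubectl --context auto-876657 replace --force -f testdata/netcat-deployment.yaml
  # wait for the pod labelled app=netcat (the test polls; kubectl wait is an equivalent shorthand)
  kubectl --context auto-876657 wait --for=condition=Ready pod -l app=netcat --timeout=15m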

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-876657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-876657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-876657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
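
Note: the DNS, Localhost and HairPin checks all run inside the netcat deployment: cluster DNS is verified with nslookup, and TCP reachability is probed with nc against localhost and against the pod's own service name (hairpin traffic). The three probes, as logged above:

  kubectl --context auto-876657 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context auto-876657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context auto-876657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"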

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-g9kh8" [5967cb41-aada-452e-b1af-c5c84a2d9e60] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004892332s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7nv8c" [0ac47ad3-9090-42a2-aecb-dc5bfddc7dfb] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003188052s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-g9kh8" [5967cb41-aada-452e-b1af-c5c84a2d9e60] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004375435s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-954820 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-954820 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7nv8c" [0ac47ad3-9090-42a2-aecb-dc5bfddc7dfb] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003691639s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-053182 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-954820 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-954820 -n embed-certs-954820
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-954820 -n embed-certs-954820: exit status 2 (333.637681ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-954820 -n embed-certs-954820
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-954820 -n embed-certs-954820: exit status 2 (345.61655ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-954820 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-954820 -n embed-certs-954820
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-954820 -n embed-certs-954820
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (43.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-876657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-876657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (43.18376557s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (43.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-053182 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-053182 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-053182 --alsologtostderr -v=1: (1.675866703s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-053182 -n default-k8s-diff-port-053182
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-053182 -n default-k8s-diff-port-053182: exit status 2 (373.729603ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-053182 -n default-k8s-diff-port-053182
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-053182 -n default-k8s-diff-port-053182: exit status 2 (313.900657ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-053182 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-053182 -n default-k8s-diff-port-053182
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-053182 -n default-k8s-diff-port-053182
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.18s)
E1120 20:58:06.531071    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:58:16.772631    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/calico/Start (54.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-876657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-876657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (54.219196468s)
--- PASS: TestNetworkPlugins/group/calico/Start (54.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (48.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-876657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-876657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (48.619612158s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (48.62s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (73.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-876657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-876657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m13.377368748s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (73.38s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-kd582" [07d92afb-b92c-438e-89f0-6c9b4d74302f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003825785s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
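
Note: ControllerPod only checks that the CNI's own controller pod (label app=kindnet in kube-system) reaches Running before the connectivity tests proceed. An equivalent manual check, offered as a shorthand for the test's polling loop:

  kubectl --context kindnet-876657 get pods -n kube-system -l app=kindnet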

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-876657 "pgrep -a kubelet"
I1120 20:57:15.982555    7731 config.go:182] Loaded profile config "kindnet-876657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-876657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lwgbm" [b4805af4-d48f-426a-af7a-74eeb1e876b9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lwgbm" [b4805af4-d48f-426a-af7a-74eeb1e876b9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003009819s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-876657 "pgrep -a kubelet"
I1120 20:57:20.642912    7731 config.go:182] Loaded profile config "custom-flannel-876657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-876657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pwncs" [b04f444f-6665-4dfa-9b85-e567373c88d7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pwncs" [b04f444f-6665-4dfa-9b85-e567373c88d7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003700354s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-876657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-pzddd" [9da1394c-9f2a-484a-86d8-af2d8d75941f] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-pzddd" [9da1394c-9f2a-484a-86d8-af2d8d75941f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004378302s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-876657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-876657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-876657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-876657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-876657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-876657 "pgrep -a kubelet"
I1120 20:57:31.753142    7731 config.go:182] Loaded profile config "calico-876657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-876657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7rkjk" [01a5199c-82a4-40a4-beca-bc50f56e4748] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7rkjk" [01a5199c-82a4-40a4-beca-bc50f56e4748] Running
E1120 20:57:39.055295    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/old-k8s-version-715005/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:57:39.061675    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/old-k8s-version-715005/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:57:39.073036    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/old-k8s-version-715005/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:57:39.094979    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/old-k8s-version-715005/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:57:39.136447    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/old-k8s-version-715005/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:57:39.217884    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/old-k8s-version-715005/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:57:39.380156    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/old-k8s-version-715005/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:57:39.702046    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/old-k8s-version-715005/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:57:40.344207    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/old-k8s-version-715005/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004239529s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-876657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-876657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-876657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)
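
Localhost and HairPin use the same netcat probe (nc -z is a connect-only scan, -w 5 a 5-second timeout, -i 5 a 5-second interval) but exercise different paths: targeting localhost checks that the pod can reach its own listener over loopback, while targeting the netcat service name routes the connection out through the service VIP and back to the same pod, i.e. hairpin NAT. The two probes, copied from the commands above:

  kubectl --context calico-876657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context calico-876657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"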

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (49.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-876657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-876657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (49.921032147s)
--- PASS: TestNetworkPlugins/group/flannel/Start (49.92s)
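
The Start step drives the CI-built binary (out/minikube-linux-amd64). With a released minikube the equivalent invocation would look roughly like the following; the profile name is illustrative:

  minikube start -p flannel-demo --memory=3072 --cni=flannel --driver=docker --container-runtime=containerd --wait=true --wait-timeout=15m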

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (65.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-876657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-876657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m5.224702725s)
--- PASS: TestNetworkPlugins/group/bridge/Start (65.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-876657 "pgrep -a kubelet"
I1120 20:57:53.195202    7731 config.go:182] Loaded profile config "enable-default-cni-876657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)
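
KubeletFlags shells into the node and lists the kubelet command line with pgrep -a, which the test then inspects (the exact flag assertions are not shown in this log). A manual equivalent against any running profile, using the released binary:

  minikube ssh -p enable-default-cni-876657 "pgrep -a kubelet"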

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-876657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fgz62" [68b9ed0e-62cf-44a8-945c-59560c5d53e9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1120 20:57:56.270876    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:57:56.277487    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:57:56.288863    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:57:56.310252    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:57:56.352038    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-fgz62" [68b9ed0e-62cf-44a8-945c-59560c5d53e9] Running
E1120 20:57:56.433999    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:57:56.595332    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:57:56.917338    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:57:57.558865    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:57:58.313150    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/functional-199012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:57:58.840467    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:57:59.550602    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/old-k8s-version-715005/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004894098s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.24s)
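
The interleaved cert_rotation errors appear to come from the harness's shared kubeconfig: client-go keeps trying to reload client certificates for profiles whose .minikube/profiles/<name>/client.crt has already been removed (old-k8s-version-715005, no-preload-480337, functional-199012), and they are noise for the tests that pass here. If those profiles are truly gone, one way to quiet the messages is to drop the stale kubeconfig entries, sketched below for a single profile (adjust names as needed):

  kubectl config delete-context no-preload-480337
  kubectl config delete-cluster no-preload-480337
  kubectl config unset users.no-preload-480337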

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-876657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-876657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-876657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-ck28g" [16ed9400-8b1c-47a5-b34a-3a0602fd16ba] Running
E1120 20:58:37.254818    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/no-preload-480337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003473885s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
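
ControllerPod waits for the flannel DaemonSet pod (label app=flannel) in the kube-flannel namespace to become Ready before the connectivity probes run. A manual equivalent, with an illustrative timeout:

  kubectl --context flannel-876657 -n kube-flannel get pods -l app=flannel
  kubectl --context flannel-876657 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m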

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-876657 "pgrep -a kubelet"
I1120 20:58:42.178425    7731 config.go:182] Loaded profile config "flannel-876657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-876657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lxp8h" [a699f92c-4faa-4a6b-a1f0-539f347f010e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lxp8h" [a699f92c-4faa-4a6b-a1f0-539f347f010e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003209882s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-876657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-876657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-876657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-876657 "pgrep -a kubelet"
I1120 20:58:58.258363    7731 config.go:182] Loaded profile config "bridge-876657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (8.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-876657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-s9tgb" [59589fe1-4cbc-4f50-9ca4-d55c47644445] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1120 20:59:00.994859    7731 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/old-k8s-version-715005/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-s9tgb" [59589fe1-4cbc-4f50-9ca4-d55c47644445] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.003975872s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-876657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-876657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-876657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)
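
The same NetCatPod/DNS/Localhost/HairPin sequence is repeated for each CNI under test (calico, enable-default-cni, flannel, bridge). To re-run just the DNS probe across the surviving profiles from this run, a loop along these lines works (the loop itself is illustrative, not part of the test harness):

  for ctx in calico-876657 enable-default-cni-876657 flannel-876657 bridge-876657; do
    kubectl --context "$ctx" exec deployment/netcat -- nslookup kubernetes.default
  done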

                                                
                                    

Test skip (26/333)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-311936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-311936
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-876657 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-876657

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-876657

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-876657

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-876657

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-876657

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-876657

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-876657

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-876657

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-876657

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-876657

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-876657

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-876657" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-876657" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21923-3769/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 20:50:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: missing-upgrade-670521
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21923-3769/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 20:50:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-058944
contexts:
- context:
    cluster: missing-upgrade-670521
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 20:50:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-670521
  name: missing-upgrade-670521
- context:
    cluster: stopped-upgrade-058944
    user: stopped-upgrade-058944
  name: stopped-upgrade-058944
current-context: ""
kind: Config
users:
- name: missing-upgrade-670521
  user:
    client-certificate: /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/missing-upgrade-670521/client.crt
    client-key: /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/missing-upgrade-670521/client.key
- name: stopped-upgrade-058944
  user:
    client-certificate: /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/stopped-upgrade-058944/client.crt
    client-key: /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/stopped-upgrade-058944/client.key
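
Note that current-context is empty in the dump above; only the two upgrade-test entries remain. To work against one of them interactively (assuming that profile is still running), a context would be selected explicitly, for example:

  kubectl config use-context stopped-upgrade-058944
  kubectl config current-context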

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-876657

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-876657"

                                                
                                                
----------------------- debugLogs end: kubenet-876657 [took: 3.402159419s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-876657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-876657
--- SKIP: TestNetworkPlugins/group/kubenet (3.58s)
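
kubenet is skipped because it is not a CNI plugin and, with the containerd runtime, a CNI is mandatory, so the kubenet-876657 profile is never created. That is why every probe in the debugLogs block above fails with "context was not found" or "Profile ... not found". A quick way to confirm which profiles and kubeconfig contexts actually exist on the host (illustrative commands, not part of the test):

  minikube profile list
  kubectl config get-contexts -o name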

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-876657 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-876657

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-876657

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-876657

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-876657

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-876657

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-876657

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-876657

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-876657

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-876657

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-876657

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-876657

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-876657" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-876657

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-876657

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-876657

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-876657

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-876657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-876657" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21923-3769/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 20:50:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: missing-upgrade-670521
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21923-3769/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 20:50:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-058944
contexts:
- context:
    cluster: missing-upgrade-670521
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 20:50:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-670521
  name: missing-upgrade-670521
- context:
    cluster: stopped-upgrade-058944
    user: stopped-upgrade-058944
  name: stopped-upgrade-058944
current-context: ""
kind: Config
users:
- name: missing-upgrade-670521
  user:
    client-certificate: /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/missing-upgrade-670521/client.crt
    client-key: /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/missing-upgrade-670521/client.key
- name: stopped-upgrade-058944
  user:
    client-certificate: /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/stopped-upgrade-058944/client.crt
    client-key: /home/jenkins/minikube-integration/21923-3769/.minikube/profiles/stopped-upgrade-058944/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-876657

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-876657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-876657"

                                                
                                                
----------------------- debugLogs end: cilium-876657 [took: 4.415961867s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-876657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-876657
--- SKIP: TestNetworkPlugins/group/cilium (4.58s)
