Test Report: Docker_Linux_containerd 21918

                    
08454a179ffa60c8ae500105aac58654b5cdef58:2025-11-19:42399

Failed tests (4/333)

Order  Failed test  Duration (s)
303 TestStartStop/group/old-k8s-version/serial/DeployApp 13.09
306 TestStartStop/group/no-preload/serial/DeployApp 12.19
327 TestStartStop/group/embed-certs/serial/DeployApp 12.69
334 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 14.61
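
All four DeployApp failures hit the same assertion: running 'ulimit -n' inside the busybox test pod returned 1024 instead of the expected 1048576. The per-test log below shows the exact commands; the following sketch reproduces the check by hand against the first failed profile (the 'kubectl wait' step is an approximation of the test's own readiness polling):

	# Deploy the test pod the same way the test does, then wait for it to become Ready
	kubectl --context old-k8s-version-975700 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-975700 wait --for=condition=Ready pod/busybox --timeout=8m

	# The assertion that failed: the open-file-descriptor limit inside the container
	kubectl --context old-k8s-version-975700 exec busybox -- /bin/sh -c "ulimit -n"
	# expected 1048576; this run returned 1024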
TestStartStop/group/old-k8s-version/serial/DeployApp (13.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-975700 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b49caea0-80e8-4473-ac1f-f9bd327c3754] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b49caea0-80e8-4473-ac1f-f9bd327c3754] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003206197s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-975700 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-975700
helpers_test.go:243: (dbg) docker inspect old-k8s-version-975700:

-- stdout --
	[
	    {
	        "Id": "fa1d8405226b204ac72daac6f171881e88b0344b7533643e7e2243a0246fe4ca",
	        "Created": "2025-11-19T22:19:38.284388499Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 244905,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:19:38.321569291Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/fa1d8405226b204ac72daac6f171881e88b0344b7533643e7e2243a0246fe4ca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fa1d8405226b204ac72daac6f171881e88b0344b7533643e7e2243a0246fe4ca/hostname",
	        "HostsPath": "/var/lib/docker/containers/fa1d8405226b204ac72daac6f171881e88b0344b7533643e7e2243a0246fe4ca/hosts",
	        "LogPath": "/var/lib/docker/containers/fa1d8405226b204ac72daac6f171881e88b0344b7533643e7e2243a0246fe4ca/fa1d8405226b204ac72daac6f171881e88b0344b7533643e7e2243a0246fe4ca-json.log",
	        "Name": "/old-k8s-version-975700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-975700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-975700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fa1d8405226b204ac72daac6f171881e88b0344b7533643e7e2243a0246fe4ca",
	                "LowerDir": "/var/lib/docker/overlay2/82f9fc885f3a15658949bf3138691f10889fccea52145002efd1a4a56c392ddc-init/diff:/var/lib/docker/overlay2/b09480e350abbb2f4f48b19448dc8e9ddd0de679fdb8cd59ebc5b758a29b344e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/82f9fc885f3a15658949bf3138691f10889fccea52145002efd1a4a56c392ddc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/82f9fc885f3a15658949bf3138691f10889fccea52145002efd1a4a56c392ddc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/82f9fc885f3a15658949bf3138691f10889fccea52145002efd1a4a56c392ddc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-975700",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-975700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-975700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-975700",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-975700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bdcc92270fe5f34f2b3211c596bcb03676f7d021d1ab19d1405cbc9ff65513fb",
	            "SandboxKey": "/var/run/docker/netns/bdcc92270fe5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-975700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e025fa4e3e969ab94188de7ccce8cf41b046fa1de9b7b2485f5bcca1daedd849",
	                    "EndpointID": "8cbfdb5bbf934780f84e734118116ddf815c2fea44670767c9e66317e265e4f4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "e6:6b:48:9f:07:21",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-975700",
	                        "fa1d8405226b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-975700 -n old-k8s-version-975700
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-975700 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-975700 logs -n 25: (1.056627693s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ start   │ -p NoKubernetes-836292 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                         │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │ 19 Nov 25 22:19 UTC │
	│ ssh     │ -p cilium-904997 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo containerd config dump                                                                                                                                                                                                        │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo crio config                                                                                                                                                                                                                   │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ delete  │ -p cilium-904997                                                                                                                                                                                                                                    │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │ 19 Nov 25 22:18 UTC │
	│ start   │ -p force-systemd-flag-635885 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-635885 │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │ 19 Nov 25 22:19 UTC │
	│ ssh     │ -p NoKubernetes-836292 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │                     │
	│ ssh     │ force-systemd-flag-635885 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-635885 │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ delete  │ -p force-systemd-flag-635885                                                                                                                                                                                                                        │ force-systemd-flag-635885 │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ stop    │ -p NoKubernetes-836292                                                                                                                                                                                                                              │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ start   │ -p NoKubernetes-836292 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ start   │ -p cert-options-071115 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-071115       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ ssh     │ -p NoKubernetes-836292 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │                     │
	│ delete  │ -p NoKubernetes-836292                                                                                                                                                                                                                              │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ start   │ -p old-k8s-version-975700 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-975700    │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:20 UTC │
	│ ssh     │ cert-options-071115 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-071115       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ ssh     │ -p cert-options-071115 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-071115       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ delete  │ -p cert-options-071115                                                                                                                                                                                                                              │ cert-options-071115       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ start   │ -p no-preload-638439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-638439         │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:19:48
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:19:48.990275  248121 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:19:48.990406  248121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:19:48.990419  248121 out.go:374] Setting ErrFile to fd 2...
	I1119 22:19:48.990423  248121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:19:48.990627  248121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
	I1119 22:19:48.991193  248121 out.go:368] Setting JSON to false
	I1119 22:19:48.992321  248121 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3729,"bootTime":1763587060,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:19:48.992426  248121 start.go:143] virtualization: kvm guest
	I1119 22:19:48.994475  248121 out.go:179] * [no-preload-638439] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:19:48.995854  248121 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:19:48.995867  248121 notify.go:221] Checking for updates...
	I1119 22:19:48.998724  248121 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:19:49.000141  248121 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:19:49.004556  248121 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9296/.minikube
	I1119 22:19:49.005782  248121 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:19:49.006906  248121 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:19:49.008438  248121 config.go:182] Loaded profile config "cert-expiration-207460": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:19:49.008559  248121 config.go:182] Loaded profile config "kubernetes-upgrade-133839": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:19:49.008672  248121 config.go:182] Loaded profile config "old-k8s-version-975700": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 22:19:49.008773  248121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:19:49.032838  248121 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:19:49.032973  248121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:19:49.090138  248121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:19:49.078907682 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:19:49.090254  248121 docker.go:319] overlay module found
	I1119 22:19:49.091878  248121 out.go:179] * Using the docker driver based on user configuration
	I1119 22:19:49.093038  248121 start.go:309] selected driver: docker
	I1119 22:19:49.093053  248121 start.go:930] validating driver "docker" against <nil>
	I1119 22:19:49.093064  248121 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:19:49.093625  248121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:19:49.156775  248121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:19:49.145211302 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:19:49.157058  248121 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:19:49.157439  248121 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:19:49.159270  248121 out.go:179] * Using Docker driver with root privileges
	I1119 22:19:49.160689  248121 cni.go:84] Creating CNI manager for ""
	I1119 22:19:49.160762  248121 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:19:49.160776  248121 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:19:49.160859  248121 start.go:353] cluster config:
	{Name:no-preload-638439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-638439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:19:49.162538  248121 out.go:179] * Starting "no-preload-638439" primary control-plane node in "no-preload-638439" cluster
	I1119 22:19:49.165506  248121 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 22:19:49.166733  248121 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:19:49.168220  248121 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:19:49.168286  248121 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:19:49.168353  248121 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/config.json ...
	I1119 22:19:49.168395  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/config.json: {Name:mk80aa81bbdb1209c6edea855d376fb83f4d3158 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:19:49.168457  248121 cache.go:107] acquiring lock: {Name:mk3047e241e868539f7fa71732db2494bd5accac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.168492  248121 cache.go:107] acquiring lock: {Name:mkfa0cff605af699ff39a13e0c5b50d01194658e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.168527  248121 cache.go:107] acquiring lock: {Name:mk97f6c43b208e1a8e4ae123374c490c517b3f77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.168548  248121 cache.go:115] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1119 22:19:49.168561  248121 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 131.881µs
	I1119 22:19:49.168577  248121 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1119 22:19:49.168586  248121 cache.go:107] acquiring lock: {Name:mk95307f4a2dfa9e7a1dbc92b6b01bf8659d9b13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.168623  248121 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:49.168652  248121 cache.go:107] acquiring lock: {Name:mk07d9df97c614ffb0affecc21609079d8bc04b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.168677  248121 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:49.168687  248121 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:49.168749  248121 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1119 22:19:49.169004  248121 cache.go:107] acquiring lock: {Name:mk5d2dd3f2b18e53fa90921f4c0c75406a912168 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.169610  248121 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:49.169116  248121 cache.go:107] acquiring lock: {Name:mkabd0eddb0cd66931eabcbabac2ddbe82464607 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.170495  248121 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:49.169136  248121 cache.go:107] acquiring lock: {Name:mkc18e74e5d25fdb795ed308cf7ce3da142a9be0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.170703  248121 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:49.171552  248121 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:49.171558  248121 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1119 22:19:49.171569  248121 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:49.171576  248121 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:49.172459  248121 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:49.172478  248121 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:49.172507  248121 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:49.200114  248121 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:19:49.200187  248121 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:19:49.200226  248121 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:19:49.200265  248121 start.go:360] acquireMachinesLock for no-preload-638439: {Name:mk6b4dc7fd24c69d9288f594d61933b094ed5442 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.200436  248121 start.go:364] duration metric: took 142.192µs to acquireMachinesLock for "no-preload-638439"
	I1119 22:19:49.200608  248121 start.go:93] Provisioning new machine with config: &{Name:no-preload-638439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-638439 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:19:49.200727  248121 start.go:125] createHost starting for "" (driver="docker")
	I1119 22:19:46.119049  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:19:46.119476  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:19:46.119522  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:19:46.119566  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:19:46.151572  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:46.151601  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:46.151607  216336 cri.go:89] found id: ""
	I1119 22:19:46.151617  216336 logs.go:282] 2 containers: [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:19:46.151687  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:46.155958  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:46.160473  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:19:46.160530  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:19:46.191589  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:46.191612  216336 cri.go:89] found id: ""
	I1119 22:19:46.191619  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:19:46.191670  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:46.196383  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:19:46.196437  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:19:46.225509  216336 cri.go:89] found id: ""
	I1119 22:19:46.225529  216336 logs.go:282] 0 containers: []
	W1119 22:19:46.225540  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:19:46.225546  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:19:46.225599  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:19:46.254866  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:46.254913  216336 cri.go:89] found id: ""
	I1119 22:19:46.254924  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:19:46.254979  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:46.259701  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:19:46.259765  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:19:46.292564  216336 cri.go:89] found id: ""
	I1119 22:19:46.292591  216336 logs.go:282] 0 containers: []
	W1119 22:19:46.292601  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:19:46.292608  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:19:46.292667  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:19:46.329564  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:46.329596  216336 cri.go:89] found id: ""
	I1119 22:19:46.329606  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:19:46.329667  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:46.335222  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:19:46.335276  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:19:46.367004  216336 cri.go:89] found id: ""
	I1119 22:19:46.367028  216336 logs.go:282] 0 containers: []
	W1119 22:19:46.367039  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:19:46.367047  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:19:46.367105  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:19:46.399927  216336 cri.go:89] found id: ""
	I1119 22:19:46.399974  216336 logs.go:282] 0 containers: []
	W1119 22:19:46.399984  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:19:46.400002  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:19:46.400017  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:19:46.463044  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:19:46.463068  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:19:46.463083  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:46.497691  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:19:46.497718  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:46.535424  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:19:46.535455  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:46.575124  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:19:46.575154  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:19:46.607742  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:19:46.607769  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:19:46.710299  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:19:46.710332  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:19:46.724051  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:19:46.724080  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:46.762457  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:19:46.762489  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:46.803568  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:19:46.803601  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:19:49.354660  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:19:49.355043  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:19:49.355109  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:19:49.355169  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:19:49.395681  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:49.395705  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:49.395709  216336 cri.go:89] found id: ""
	I1119 22:19:49.395716  216336 logs.go:282] 2 containers: [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:19:49.395781  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:49.403424  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:49.410799  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:19:49.410949  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:19:49.452918  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:49.452941  216336 cri.go:89] found id: ""
	I1119 22:19:49.452952  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:19:49.453011  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:49.458252  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:19:49.458323  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:19:49.497813  216336 cri.go:89] found id: ""
	I1119 22:19:49.497837  216336 logs.go:282] 0 containers: []
	W1119 22:19:49.497855  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:19:49.497863  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:19:49.497929  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:19:49.533334  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:49.533350  216336 cri.go:89] found id: ""
	I1119 22:19:49.533357  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:19:49.533399  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:49.537784  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:19:49.537858  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:19:49.568018  216336 cri.go:89] found id: ""
	I1119 22:19:49.568044  216336 logs.go:282] 0 containers: []
	W1119 22:19:49.568056  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:19:49.568063  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:19:49.568119  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:19:49.609525  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:49.609556  216336 cri.go:89] found id: ""
	I1119 22:19:49.609566  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:19:49.609626  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:49.616140  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:19:49.616211  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:19:49.655231  216336 cri.go:89] found id: ""
	I1119 22:19:49.655262  216336 logs.go:282] 0 containers: []
	W1119 22:19:49.655272  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:19:49.655279  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:19:49.655333  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:19:49.689095  216336 cri.go:89] found id: ""
	I1119 22:19:49.689153  216336 logs.go:282] 0 containers: []
	W1119 22:19:49.689165  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:19:49.689184  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:19:49.689221  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:19:49.810665  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:19:49.810701  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:19:49.901949  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:19:49.901999  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:19:49.902017  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:49.959095  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:19:49.959128  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:50.003553  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:19:50.003592  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:19:50.058586  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:19:50.058623  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:19:50.074307  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:19:50.074340  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:50.111045  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:19:50.111081  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:50.150599  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:19:50.150632  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:50.185189  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:19:50.185216  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:19:48.204748  244005 out.go:252]   - Booting up control plane ...
	I1119 22:19:48.204897  244005 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:19:48.205005  244005 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:19:48.206240  244005 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:19:48.231808  244005 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:19:48.232853  244005 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:19:48.232929  244005 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:19:48.338373  244005 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1119 22:19:49.203330  248121 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:19:49.203668  248121 start.go:159] libmachine.API.Create for "no-preload-638439" (driver="docker")
	I1119 22:19:49.203755  248121 client.go:173] LocalClient.Create starting
	I1119 22:19:49.203905  248121 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem
	I1119 22:19:49.203977  248121 main.go:143] libmachine: Decoding PEM data...
	I1119 22:19:49.204016  248121 main.go:143] libmachine: Parsing certificate...
	I1119 22:19:49.204103  248121 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem
	I1119 22:19:49.204159  248121 main.go:143] libmachine: Decoding PEM data...
	I1119 22:19:49.204190  248121 main.go:143] libmachine: Parsing certificate...
	I1119 22:19:49.204684  248121 cli_runner.go:164] Run: docker network inspect no-preload-638439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:19:49.233073  248121 cli_runner.go:211] docker network inspect no-preload-638439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:19:49.233150  248121 network_create.go:284] running [docker network inspect no-preload-638439] to gather additional debugging logs...
	I1119 22:19:49.233181  248121 cli_runner.go:164] Run: docker network inspect no-preload-638439
	W1119 22:19:49.260692  248121 cli_runner.go:211] docker network inspect no-preload-638439 returned with exit code 1
	I1119 22:19:49.260724  248121 network_create.go:287] error running [docker network inspect no-preload-638439]: docker network inspect no-preload-638439: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-638439 not found
	I1119 22:19:49.260740  248121 network_create.go:289] output of [docker network inspect no-preload-638439]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-638439 not found
	
	** /stderr **
	I1119 22:19:49.260835  248121 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:19:49.281699  248121 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-02d9279961e7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:f0:7b:99:dd:08} reservation:<nil>}
	I1119 22:19:49.282496  248121 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-474134d72c89 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ee:14:41:ce:21:e4} reservation:<nil>}
	I1119 22:19:49.283428  248121 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-527206f47d61 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:ef:fd:4c:e4:1b} reservation:<nil>}
	I1119 22:19:49.284394  248121 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ac16fd64007f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:16:dc:21:09:78:e5} reservation:<nil>}
	I1119 22:19:49.285073  248121 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-11547e9c7cf3 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:a2:49:21:10:91:74} reservation:<nil>}
	I1119 22:19:49.286118  248121 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-e025fa4e3e96 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:c2:19:71:ce:4a:3c} reservation:<nil>}
	I1119 22:19:49.287275  248121 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e92190}
	I1119 22:19:49.287353  248121 network_create.go:124] attempt to create docker network no-preload-638439 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1119 22:19:49.287448  248121 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-638439 no-preload-638439
	I1119 22:19:49.349621  248121 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1119 22:19:49.349748  248121 network_create.go:108] docker network no-preload-638439 192.168.103.0/24 created
	I1119 22:19:49.349780  248121 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-638439" container
	I1119 22:19:49.349859  248121 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:19:49.350149  248121 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1119 22:19:49.361305  248121 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1119 22:19:49.363150  248121 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1119 22:19:49.375619  248121 cli_runner.go:164] Run: docker volume create no-preload-638439 --label name.minikube.sigs.k8s.io=no-preload-638439 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:19:49.389385  248121 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1119 22:19:49.396358  248121 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1119 22:19:49.402036  248121 oci.go:103] Successfully created a docker volume no-preload-638439
	I1119 22:19:49.402119  248121 cli_runner.go:164] Run: docker run --rm --name no-preload-638439-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-638439 --entrypoint /usr/bin/test -v no-preload-638439:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:19:49.404338  248121 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1119 22:19:49.471774  248121 cache.go:157] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1119 22:19:49.471808  248121 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 303.216742ms
	I1119 22:19:49.471832  248121 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1119 22:19:49.854076  248121 cache.go:157] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1119 22:19:49.854102  248121 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 685.635122ms
	I1119 22:19:49.854114  248121 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1119 22:19:49.969965  248121 oci.go:107] Successfully prepared a docker volume no-preload-638439
	I1119 22:19:49.970027  248121 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	W1119 22:19:49.970211  248121 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 22:19:49.970251  248121 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 22:19:49.970298  248121 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:19:50.046746  248121 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-638439 --name no-preload-638439 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-638439 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-638439 --network no-preload-638439 --ip 192.168.103.2 --volume no-preload-638439:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:19:50.374513  248121 cli_runner.go:164] Run: docker container inspect no-preload-638439 --format={{.State.Running}}
	I1119 22:19:50.397354  248121 cli_runner.go:164] Run: docker container inspect no-preload-638439 --format={{.State.Status}}
	I1119 22:19:50.420153  248121 cli_runner.go:164] Run: docker exec no-preload-638439 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:19:50.480826  248121 oci.go:144] the created container "no-preload-638439" has a running status.
	I1119 22:19:50.480855  248121 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa...
	I1119 22:19:50.741014  248121 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:19:50.777653  248121 cli_runner.go:164] Run: docker container inspect no-preload-638439 --format={{.State.Status}}
	I1119 22:19:50.805773  248121 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:19:50.805802  248121 kic_runner.go:114] Args: [docker exec --privileged no-preload-638439 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:19:50.864742  248121 cli_runner.go:164] Run: docker container inspect no-preload-638439 --format={{.State.Status}}
	I1119 22:19:50.878812  248121 cache.go:157] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1119 22:19:50.878846  248121 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.709887948s
	I1119 22:19:50.878866  248121 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1119 22:19:50.883024  248121 cache.go:157] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1119 22:19:50.883052  248121 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.714530905s
	I1119 22:19:50.883067  248121 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1119 22:19:50.889090  248121 cache.go:157] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1119 22:19:50.889119  248121 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.72053761s
	I1119 22:19:50.889134  248121 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1119 22:19:50.890545  248121 machine.go:94] provisionDockerMachine start ...
	I1119 22:19:50.890654  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:50.917029  248121 main.go:143] libmachine: Using SSH client type: native
	I1119 22:19:50.917372  248121 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1119 22:19:50.917394  248121 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:19:50.918143  248121 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41082->127.0.0.1:33063: read: connection reset by peer
	I1119 22:19:50.954753  248121 cache.go:157] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1119 22:19:50.954786  248121 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.785730546s
	I1119 22:19:50.954801  248121 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1119 22:19:51.295575  248121 cache.go:157] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1119 22:19:51.295602  248121 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.126530323s
	I1119 22:19:51.295614  248121 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1119 22:19:51.295629  248121 cache.go:87] Successfully saved all images to host disk.
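Before the cache lines above, minikube scanned the existing Docker bridge networks (192.168.49.0/24 through 192.168.94.0/24 were already taken) and settled on 192.168.103.0/24 for no-preload-638439. A rough way to see the same picture from the host is to list each network's subnet; this is just a sketch using standard docker CLI flags, not minikube's allocator.

# List the subnet of every Docker network on the host (the ones minikube skipped above).
for net in $(docker network ls --format '{{.Name}}'); do
  printf '%-25s %s\n' "$net" \
    "$(docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}} {{end}}' "$net")"
done
# minikube then created its own bridge on the first free /24 it found, roughly:
#   docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 ... no-preload-638439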
	I1119 22:19:53.340728  244005 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.002509 seconds
	I1119 22:19:53.340920  244005 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:19:53.353852  244005 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:19:53.877436  244005 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:19:53.877630  244005 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-975700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:19:54.388156  244005 kubeadm.go:319] [bootstrap-token] Using token: cb0uuv.ole7whobrm4tnmeu
	I1119 22:19:54.389814  244005 out.go:252]   - Configuring RBAC rules ...
	I1119 22:19:54.389996  244005 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:19:54.396226  244005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:19:54.404040  244005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:19:54.407336  244005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:19:54.410095  244005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:19:54.412761  244005 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:19:54.424912  244005 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:19:54.627091  244005 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:19:54.803149  244005 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:19:54.807538  244005 kubeadm.go:319] 
	I1119 22:19:54.807624  244005 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:19:54.807631  244005 kubeadm.go:319] 
	I1119 22:19:54.807719  244005 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:19:54.807724  244005 kubeadm.go:319] 
	I1119 22:19:54.807753  244005 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:19:54.807821  244005 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:19:54.807898  244005 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:19:54.807905  244005 kubeadm.go:319] 
	I1119 22:19:54.807968  244005 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:19:54.807973  244005 kubeadm.go:319] 
	I1119 22:19:54.808037  244005 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:19:54.808042  244005 kubeadm.go:319] 
	I1119 22:19:54.808105  244005 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:19:54.808197  244005 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:19:54.808278  244005 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:19:54.808283  244005 kubeadm.go:319] 
	I1119 22:19:54.808378  244005 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:19:54.808482  244005 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:19:54.808488  244005 kubeadm.go:319] 
	I1119 22:19:54.808581  244005 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token cb0uuv.ole7whobrm4tnmeu \
	I1119 22:19:54.808697  244005 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6123875ff628fb9eedbd72f2253477865aa197083b84a1d60cb6c00de308bc63 \
	I1119 22:19:54.808745  244005 kubeadm.go:319] 	--control-plane 
	I1119 22:19:54.808753  244005 kubeadm.go:319] 
	I1119 22:19:54.808860  244005 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:19:54.808867  244005 kubeadm.go:319] 
	I1119 22:19:54.808978  244005 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token cb0uuv.ole7whobrm4tnmeu \
	I1119 22:19:54.809119  244005 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6123875ff628fb9eedbd72f2253477865aa197083b84a1d60cb6c00de308bc63 
	I1119 22:19:54.812703  244005 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 22:19:54.812825  244005 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:19:54.812852  244005 cni.go:84] Creating CNI manager for ""
	I1119 22:19:54.812906  244005 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:19:54.814910  244005 out.go:179] * Configuring CNI (Container Networking Interface) ...
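At this point kubeadm reports a healthy control plane for old-k8s-version-975700 and minikube moves on to applying kindnet as the CNI. To confirm that step from the host, something along the following lines works; the app=kindnet label selector is an assumption about minikube's kindnet manifest, so adjust it if the daemonset is labelled differently.

# Check the control plane and the CNI daemonset from the host.
kubectl --context old-k8s-version-975700 get nodes -o wide
kubectl --context old-k8s-version-975700 -n kube-system get daemonsets
# Assumes the kindnet manifest uses the app=kindnet label (not confirmed by the log above):
kubectl --context old-k8s-version-975700 -n kube-system get pods -l app=kindnet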
	I1119 22:19:52.733247  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:19:52.733770  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:19:52.733821  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:19:52.733900  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:19:52.766790  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:52.766819  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:52.766824  216336 cri.go:89] found id: ""
	I1119 22:19:52.766834  216336 logs.go:282] 2 containers: [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:19:52.766917  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:52.771725  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:52.776283  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:19:52.776357  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:19:52.808152  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:52.808179  216336 cri.go:89] found id: ""
	I1119 22:19:52.808190  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:19:52.808260  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:52.812851  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:19:52.812954  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:19:52.844459  216336 cri.go:89] found id: ""
	I1119 22:19:52.844483  216336 logs.go:282] 0 containers: []
	W1119 22:19:52.844492  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:19:52.844499  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:19:52.844560  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:19:52.875911  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:52.875939  216336 cri.go:89] found id: ""
	I1119 22:19:52.875948  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:19:52.876008  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:52.880449  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:19:52.880526  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:19:52.913101  216336 cri.go:89] found id: ""
	I1119 22:19:52.913139  216336 logs.go:282] 0 containers: []
	W1119 22:19:52.913150  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:19:52.913158  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:19:52.913240  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:19:52.945143  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:52.945172  216336 cri.go:89] found id: ""
	I1119 22:19:52.945182  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:19:52.945240  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:52.949921  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:19:52.950006  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:19:52.984180  216336 cri.go:89] found id: ""
	I1119 22:19:52.984214  216336 logs.go:282] 0 containers: []
	W1119 22:19:52.984225  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:19:52.984233  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:19:52.984296  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:19:53.016636  216336 cri.go:89] found id: ""
	I1119 22:19:53.016661  216336 logs.go:282] 0 containers: []
	W1119 22:19:53.016671  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:19:53.016691  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:19:53.016707  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:53.053700  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:19:53.053730  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:53.088889  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:19:53.088922  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:19:53.104350  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:19:53.104378  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:19:53.165418  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:19:53.165442  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:19:53.165460  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:53.197214  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:19:53.197252  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:53.228109  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:19:53.228145  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:53.261694  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:19:53.261727  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:19:53.302850  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:19:53.302891  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:19:53.333442  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:19:53.333466  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:19:54.046074  248121 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-638439
	
	I1119 22:19:54.046106  248121 ubuntu.go:182] provisioning hostname "no-preload-638439"
	I1119 22:19:54.046172  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:54.065777  248121 main.go:143] libmachine: Using SSH client type: native
	I1119 22:19:54.066044  248121 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1119 22:19:54.066060  248121 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-638439 && echo "no-preload-638439" | sudo tee /etc/hostname
	I1119 22:19:54.204707  248121 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-638439
	
	I1119 22:19:54.204779  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:54.223401  248121 main.go:143] libmachine: Using SSH client type: native
	I1119 22:19:54.223669  248121 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1119 22:19:54.223696  248121 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-638439' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-638439/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-638439' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:19:54.352178  248121 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:19:54.352206  248121 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9296/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9296/.minikube}
	I1119 22:19:54.352222  248121 ubuntu.go:190] setting up certificates
	I1119 22:19:54.352230  248121 provision.go:84] configureAuth start
	I1119 22:19:54.352301  248121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-638439
	I1119 22:19:54.371286  248121 provision.go:143] copyHostCerts
	I1119 22:19:54.371354  248121 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem, removing ...
	I1119 22:19:54.371370  248121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem
	I1119 22:19:54.371451  248121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem (1078 bytes)
	I1119 22:19:54.371570  248121 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem, removing ...
	I1119 22:19:54.371582  248121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem
	I1119 22:19:54.371623  248121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem (1123 bytes)
	I1119 22:19:54.371701  248121 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem, removing ...
	I1119 22:19:54.371710  248121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem
	I1119 22:19:54.371748  248121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem (1679 bytes)
	I1119 22:19:54.371818  248121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem org=jenkins.no-preload-638439 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-638439]
	I1119 22:19:54.471021  248121 provision.go:177] copyRemoteCerts
	I1119 22:19:54.471092  248121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:19:54.471126  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:54.492235  248121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa Username:docker}
	I1119 22:19:54.594331  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:19:54.619378  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 22:19:54.640347  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 22:19:54.663269  248121 provision.go:87] duration metric: took 311.007703ms to configureAuth
	I1119 22:19:54.663306  248121 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:19:54.663514  248121 config.go:182] Loaded profile config "no-preload-638439": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:19:54.663528  248121 machine.go:97] duration metric: took 3.772952055s to provisionDockerMachine
	I1119 22:19:54.663538  248121 client.go:176] duration metric: took 5.459757711s to LocalClient.Create
	I1119 22:19:54.663558  248121 start.go:167] duration metric: took 5.459889493s to libmachine.API.Create "no-preload-638439"
	I1119 22:19:54.663572  248121 start.go:293] postStartSetup for "no-preload-638439" (driver="docker")
	I1119 22:19:54.663584  248121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:19:54.663643  248121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:19:54.663702  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:54.693309  248121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa Username:docker}
	I1119 22:19:54.794533  248121 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:19:54.799614  248121 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:19:54.799652  248121 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:19:54.799667  248121 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9296/.minikube/addons for local assets ...
	I1119 22:19:54.799750  248121 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9296/.minikube/files for local assets ...
	I1119 22:19:54.799853  248121 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem -> 128212.pem in /etc/ssl/certs
	I1119 22:19:54.800010  248121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:19:54.811703  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem --> /etc/ssl/certs/128212.pem (1708 bytes)
	I1119 22:19:54.833815  248121 start.go:296] duration metric: took 170.228401ms for postStartSetup
	I1119 22:19:54.834269  248121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-638439
	I1119 22:19:54.855648  248121 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/config.json ...
	I1119 22:19:54.855997  248121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:19:54.856065  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:54.875839  248121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa Username:docker}
	I1119 22:19:54.971298  248121 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:19:54.976558  248121 start.go:128] duration metric: took 5.775804384s to createHost
	I1119 22:19:54.976584  248121 start.go:83] releasing machines lock for "no-preload-638439", held for 5.775996243s
	I1119 22:19:54.976652  248121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-638439
	I1119 22:19:54.996323  248121 ssh_runner.go:195] Run: cat /version.json
	I1119 22:19:54.996379  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:54.996397  248121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:19:54.996468  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:55.015498  248121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa Username:docker}
	I1119 22:19:55.015796  248121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa Username:docker}
	I1119 22:19:55.110222  248121 ssh_runner.go:195] Run: systemctl --version
	I1119 22:19:55.167157  248121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:19:55.172373  248121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:19:55.172445  248121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:19:55.200823  248121 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 22:19:55.200849  248121 start.go:496] detecting cgroup driver to use...
	I1119 22:19:55.200917  248121 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 22:19:55.200971  248121 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 22:19:55.216429  248121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 22:19:55.230198  248121 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:19:55.230259  248121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:19:55.247760  248121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:19:55.266193  248121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:19:55.355176  248121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:19:55.456550  248121 docker.go:234] disabling docker service ...
	I1119 22:19:55.456609  248121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:19:55.479653  248121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:19:55.493533  248121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:19:55.592560  248121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:19:55.702080  248121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:19:55.719351  248121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:19:55.735307  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 22:19:55.748222  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 22:19:55.759552  248121 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1119 22:19:55.759604  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1119 22:19:55.771633  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:19:55.782179  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 22:19:55.791940  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:19:55.801486  248121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:19:55.810671  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 22:19:55.820637  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 22:19:55.830057  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 22:19:55.839605  248121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:19:55.847930  248121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:19:55.856300  248121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:19:55.943868  248121 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1119 22:19:56.031481  248121 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 22:19:56.031555  248121 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 22:19:56.036560  248121 start.go:564] Will wait 60s for crictl version
	I1119 22:19:56.036619  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.040772  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:19:56.068661  248121 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 22:19:56.068728  248121 ssh_runner.go:195] Run: containerd --version
	I1119 22:19:56.092486  248121 ssh_runner.go:195] Run: containerd --version
	I1119 22:19:56.118002  248121 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
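The sed commands earlier in this block rewrite /etc/containerd/config.toml in place: pin the sandbox image to registry.k8s.io/pause:3.10.1, disable restrict_oom_score_adj, switch the runc runtime to io.containerd.runc.v2 with SystemdCgroup = true, point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports, after which containerd is restarted. A quick way to confirm the merged result on the node is sketched below, assuming the stock containerd 2.x configuration layout.

# Inspect the effective containerd configuration after the restart (run inside the node).
sudo containerd config dump | grep -E 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports'
# crictl should report the same runtime and version the log shows (containerd v2.1.5, CRI v1):
sudo crictl version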
	I1119 22:19:54.816277  244005 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:19:54.820558  244005 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1119 22:19:54.820581  244005 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:19:54.833857  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:19:55.525202  244005 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:19:55.525370  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:55.525485  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-975700 minikube.k8s.io/updated_at=2025_11_19T22_19_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=old-k8s-version-975700 minikube.k8s.io/primary=true
	I1119 22:19:55.543472  244005 ops.go:34] apiserver oom_adj: -16
	I1119 22:19:55.632765  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:56.133706  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:56.632860  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:57.133046  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:56.119594  248121 cli_runner.go:164] Run: docker network inspect no-preload-638439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:19:56.139074  248121 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1119 22:19:56.143662  248121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:19:56.156640  248121 kubeadm.go:884] updating cluster {Name:no-preload-638439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-638439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:19:56.156774  248121 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:19:56.156835  248121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:19:56.185228  248121 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1119 22:19:56.185258  248121 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1119 22:19:56.185326  248121 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:56.185359  248121 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:56.185391  248121 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1119 22:19:56.185403  248121 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:56.185415  248121 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:56.185453  248121 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:56.185334  248121 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:19:56.185400  248121 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:56.186856  248121 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:56.186874  248121 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:19:56.186979  248121 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:56.186979  248121 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:56.187070  248121 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1119 22:19:56.187094  248121 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:56.187129  248121 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:56.187150  248121 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:56.332716  248121 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1119 22:19:56.332783  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:56.332809  248121 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1119 22:19:56.332864  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:56.335699  248121 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1119 22:19:56.335755  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:56.343400  248121 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115"
	I1119 22:19:56.343484  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:56.354423  248121 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1119 22:19:56.354489  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:56.357606  248121 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1119 22:19:56.357630  248121 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1119 22:19:56.357659  248121 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:56.357662  248121 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:56.357709  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.357709  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.359708  248121 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1119 22:19:56.359750  248121 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:56.359792  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.365141  248121 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1119 22:19:56.365211  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1119 22:19:56.370262  248121 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1119 22:19:56.370317  248121 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:56.370368  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.380909  248121 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1119 22:19:56.380976  248121 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:56.381006  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:56.381021  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.381050  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:56.381079  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:56.387736  248121 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1119 22:19:56.387826  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:56.388049  248121 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1119 22:19:56.388093  248121 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1119 22:19:56.388134  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:56.388139  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.388097  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:56.419491  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:56.419632  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:56.422653  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:56.424802  248121 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1119 22:19:56.424851  248121 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:56.424918  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.426559  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:56.426657  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:56.426745  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1119 22:19:56.457323  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:56.459754  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:56.459823  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:56.459928  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:56.464385  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:56.464524  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1119 22:19:56.464526  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:56.499739  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1119 22:19:56.499837  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1119 22:19:56.504038  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1119 22:19:56.504120  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 22:19:56.504047  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:56.504087  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1119 22:19:56.504256  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1119 22:19:56.507722  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1119 22:19:56.507817  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1119 22:19:56.507959  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1119 22:19:56.508035  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1119 22:19:56.508064  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1119 22:19:56.508205  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1119 22:19:56.508348  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 22:19:56.515236  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1119 22:19:56.515270  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1119 22:19:56.555985  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1119 22:19:56.556025  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1119 22:19:56.556078  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:56.556101  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1119 22:19:56.556122  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1119 22:19:56.571156  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1119 22:19:56.571205  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1119 22:19:56.571220  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1119 22:19:56.571322  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1119 22:19:56.646952  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1119 22:19:56.646960  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1119 22:19:56.646995  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1119 22:19:56.647066  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 22:19:56.713984  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1119 22:19:56.714047  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1119 22:19:56.738791  248121 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1119 22:19:56.738923  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1119 22:19:56.888282  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1119 22:19:56.888324  248121 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 22:19:56.888394  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 22:19:57.461211  248121 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1119 22:19:57.461286  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:19:57.982686  248121 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.094253154s)
	I1119 22:19:57.982716  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1119 22:19:57.982712  248121 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1119 22:19:57.982738  248121 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1119 22:19:57.982764  248121 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:19:57.982789  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1119 22:19:57.982801  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:58.943228  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1119 22:19:58.943276  248121 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1119 22:19:58.943321  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1119 22:19:58.943326  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:19:55.919868  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:19:55.920354  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:19:55.920400  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:19:55.920445  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:19:55.949031  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:55.949059  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:55.949065  216336 cri.go:89] found id: ""
	I1119 22:19:55.949074  216336 logs.go:282] 2 containers: [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:19:55.949133  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:55.953108  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:55.957378  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:19:55.957442  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:19:55.987066  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:55.987094  216336 cri.go:89] found id: ""
	I1119 22:19:55.987104  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:19:55.987165  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:55.991215  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:19:55.991296  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:19:56.020982  216336 cri.go:89] found id: ""
	I1119 22:19:56.021011  216336 logs.go:282] 0 containers: []
	W1119 22:19:56.021022  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:19:56.021031  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:19:56.021093  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:19:56.051114  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:56.051138  216336 cri.go:89] found id: ""
	I1119 22:19:56.051147  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:19:56.051210  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.056071  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:19:56.056142  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:19:56.085375  216336 cri.go:89] found id: ""
	I1119 22:19:56.085398  216336 logs.go:282] 0 containers: []
	W1119 22:19:56.085405  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:19:56.085414  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:19:56.085457  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:19:56.114914  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:56.114941  216336 cri.go:89] found id: ""
	I1119 22:19:56.114951  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:19:56.115011  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.119718  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:19:56.119785  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:19:56.148992  216336 cri.go:89] found id: ""
	I1119 22:19:56.149019  216336 logs.go:282] 0 containers: []
	W1119 22:19:56.149029  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:19:56.149037  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:19:56.149096  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:19:56.179135  216336 cri.go:89] found id: ""
	I1119 22:19:56.179163  216336 logs.go:282] 0 containers: []
	W1119 22:19:56.179173  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:19:56.179190  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:19:56.179204  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:56.216379  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:19:56.216409  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:56.252073  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:19:56.252103  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:19:56.283542  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:19:56.283567  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:19:56.381327  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:19:56.381359  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:19:56.399981  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:19:56.400019  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:19:56.493857  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:19:56.493894  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:19:56.493913  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:56.537441  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:19:56.537473  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:19:56.590041  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:19:56.590076  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:56.633876  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:19:56.633925  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:59.179328  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:19:59.179856  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:19:59.179947  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:19:59.180012  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:19:59.213304  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:59.213329  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:59.213336  216336 cri.go:89] found id: ""
	I1119 22:19:59.213346  216336 logs.go:282] 2 containers: [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:19:59.213410  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:59.218953  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:59.223649  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:19:59.223722  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:19:59.256070  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:59.256133  216336 cri.go:89] found id: ""
	I1119 22:19:59.256144  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:19:59.256211  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:59.261436  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:19:59.261514  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:19:59.294827  216336 cri.go:89] found id: ""
	I1119 22:19:59.294854  216336 logs.go:282] 0 containers: []
	W1119 22:19:59.294864  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:19:59.294871  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:19:59.294944  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:19:59.328052  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:59.328078  216336 cri.go:89] found id: ""
	I1119 22:19:59.328087  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:19:59.328148  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:59.333661  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:19:59.333745  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:19:59.367498  216336 cri.go:89] found id: ""
	I1119 22:19:59.367525  216336 logs.go:282] 0 containers: []
	W1119 22:19:59.367534  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:19:59.367543  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:19:59.367601  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:19:59.401843  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:59.401868  216336 cri.go:89] found id: ""
	I1119 22:19:59.401877  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:19:59.401982  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:59.406399  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:19:59.406473  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:19:59.437867  216336 cri.go:89] found id: ""
	I1119 22:19:59.437948  216336 logs.go:282] 0 containers: []
	W1119 22:19:59.437957  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:19:59.437963  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:19:59.438041  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:19:59.465826  216336 cri.go:89] found id: ""
	I1119 22:19:59.465856  216336 logs.go:282] 0 containers: []
	W1119 22:19:59.465866  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:19:59.465905  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:19:59.465953  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:19:59.498633  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:19:59.498670  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:19:59.586643  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:19:59.586677  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:19:59.602123  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:19:59.602148  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:19:59.668657  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:19:59.668675  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:19:59.668702  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:59.705026  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:19:59.705060  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:59.741520  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:19:59.741550  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:59.780920  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:19:59.780952  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:59.819532  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:19:59.819572  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:59.861394  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:19:59.861428  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:19:57.633270  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:58.133177  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:58.633156  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:59.133958  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:59.632816  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:00.133904  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:00.633510  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:01.132810  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:01.632963  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:02.132866  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:00.209856  248121 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.266503638s)
	I1119 22:20:00.209924  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1119 22:20:00.209943  248121 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.266589504s)
	I1119 22:20:00.209953  248121 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 22:20:00.210022  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:20:00.210039  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 22:20:01.315659  248121 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.105588091s)
	I1119 22:20:01.315688  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1119 22:20:01.315709  248121 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 22:20:01.315726  248121 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.105675845s)
	I1119 22:20:01.315757  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 22:20:01.315796  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:20:02.564406  248121 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.248612967s)
	I1119 22:20:02.564435  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1119 22:20:02.564452  248121 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.248631025s)
	I1119 22:20:02.564470  248121 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1119 22:20:02.564502  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1119 22:20:02.564519  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1119 22:20:02.564590  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1119 22:20:02.568829  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1119 22:20:02.568862  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1119 22:20:02.417703  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:20:02.418103  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:20:02.418159  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:20:02.418203  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:20:02.450244  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:20:02.450266  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:02.450271  216336 cri.go:89] found id: ""
	I1119 22:20:02.450280  216336 logs.go:282] 2 containers: [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:20:02.450336  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:02.455477  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:02.460188  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:20:02.460263  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:20:02.491317  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:02.491341  216336 cri.go:89] found id: ""
	I1119 22:20:02.491351  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:20:02.491409  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:02.495754  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:20:02.495837  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:20:02.526395  216336 cri.go:89] found id: ""
	I1119 22:20:02.526421  216336 logs.go:282] 0 containers: []
	W1119 22:20:02.526433  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:20:02.526441  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:20:02.526509  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:20:02.556596  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:02.556619  216336 cri.go:89] found id: ""
	I1119 22:20:02.556629  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:20:02.556686  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:02.561029  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:20:02.561102  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:20:02.593442  216336 cri.go:89] found id: ""
	I1119 22:20:02.593468  216336 logs.go:282] 0 containers: []
	W1119 22:20:02.593480  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:20:02.593488  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:20:02.593547  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:20:02.626155  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:02.626181  216336 cri.go:89] found id: ""
	I1119 22:20:02.626191  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:20:02.626239  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:02.630831  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:20:02.630910  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:20:02.663060  216336 cri.go:89] found id: ""
	I1119 22:20:02.663088  216336 logs.go:282] 0 containers: []
	W1119 22:20:02.663098  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:20:02.663106  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:20:02.663159  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:20:02.692104  216336 cri.go:89] found id: ""
	I1119 22:20:02.692132  216336 logs.go:282] 0 containers: []
	W1119 22:20:02.692142  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:20:02.692159  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:20:02.692172  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:02.730157  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:20:02.730198  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:20:02.764408  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:20:02.764435  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:20:02.871409  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:20:02.871460  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:20:02.912737  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:20:02.912778  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:02.958177  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:20:02.958229  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:03.003908  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:20:03.003950  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:20:03.062041  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:20:03.062076  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:20:03.080938  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:20:03.080972  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:20:03.153154  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:20:03.153177  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:20:03.153191  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:02.633509  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:03.132907  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:03.633598  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:04.133836  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:04.632911  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:05.133740  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:05.633397  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:06.133422  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:06.633053  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:07.133122  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:07.632971  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:07.709877  244005 kubeadm.go:1114] duration metric: took 12.184544724s to wait for elevateKubeSystemPrivileges
	I1119 22:20:07.709929  244005 kubeadm.go:403] duration metric: took 23.328681682s to StartCluster
	I1119 22:20:07.709949  244005 settings.go:142] acquiring lock: {Name:mk3c795849984e82ee99295088dd85252bd75f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:07.710024  244005 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:20:07.711281  244005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/kubeconfig: {Name:mk5b9093863cb8ca8629eea9fd861356875781d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:07.726769  244005 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:20:07.726909  244005 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:20:07.727036  244005 config.go:182] Loaded profile config "old-k8s-version-975700": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 22:20:07.727028  244005 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:20:07.727107  244005 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-975700"
	I1119 22:20:07.727154  244005 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-975700"
	I1119 22:20:07.727201  244005 host.go:66] Checking if "old-k8s-version-975700" exists ...
	I1119 22:20:07.727269  244005 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-975700"
	I1119 22:20:07.727331  244005 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-975700"
	I1119 22:20:07.727652  244005 cli_runner.go:164] Run: docker container inspect old-k8s-version-975700 --format={{.State.Status}}
	I1119 22:20:07.727759  244005 cli_runner.go:164] Run: docker container inspect old-k8s-version-975700 --format={{.State.Status}}
	I1119 22:20:07.759624  244005 out.go:179] * Verifying Kubernetes components...
	I1119 22:20:07.760449  244005 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-975700"
	I1119 22:20:07.760487  244005 host.go:66] Checking if "old-k8s-version-975700" exists ...
	I1119 22:20:07.760848  244005 cli_runner.go:164] Run: docker container inspect old-k8s-version-975700 --format={{.State.Status}}
	I1119 22:20:07.781264  244005 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:20:07.781292  244005 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:20:07.781358  244005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-975700
	I1119 22:20:07.790624  244005 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:20:07.790705  244005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:20:07.805293  244005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/old-k8s-version-975700/id_rsa Username:docker}
	I1119 22:20:07.811125  244005 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:20:07.811152  244005 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:20:07.811221  244005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-975700
	I1119 22:20:07.839037  244005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/old-k8s-version-975700/id_rsa Username:docker}
	I1119 22:20:07.927378  244005 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:20:07.930474  244005 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:20:07.930565  244005 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:20:07.945012  244005 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:20:08.325616  244005 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1119 22:20:08.326981  244005 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-975700" to be "Ready" ...
	I1119 22:20:08.525071  244005 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1119 22:20:05.409665  248121 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (2.845117956s)
	I1119 22:20:05.409701  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1119 22:20:05.409742  248121 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1119 22:20:05.409813  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1119 22:20:05.827105  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1119 22:20:05.827149  248121 cache_images.go:125] Successfully loaded all cached images
	I1119 22:20:05.827155  248121 cache_images.go:94] duration metric: took 9.641883158s to LoadCachedImages
	I1119 22:20:05.827169  248121 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 containerd true true} ...
	I1119 22:20:05.827281  248121 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-638439 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-638439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:20:05.827350  248121 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:20:05.854538  248121 cni.go:84] Creating CNI manager for ""
	I1119 22:20:05.854565  248121 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:20:05.854580  248121 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:20:05.854605  248121 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-638439 NodeName:no-preload-638439 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:20:05.854728  248121 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-638439"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:20:05.854794  248121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:20:05.863483  248121 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1119 22:20:05.863536  248121 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1119 22:20:05.871942  248121 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1119 22:20:05.871968  248121 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1119 22:20:05.871947  248121 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1119 22:20:05.872035  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1119 22:20:05.876399  248121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1119 22:20:05.876433  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1119 22:20:07.043592  248121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:20:07.058665  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1119 22:20:07.063097  248121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1119 22:20:07.063136  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1119 22:20:07.259328  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1119 22:20:07.263904  248121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1119 22:20:07.263944  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1119 22:20:07.467537  248121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:20:07.476103  248121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1119 22:20:07.489039  248121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:20:07.504456  248121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1119 22:20:07.517675  248121 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:20:07.521966  248121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:20:07.532448  248121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:20:07.616669  248121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:20:07.647854  248121 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439 for IP: 192.168.103.2
	I1119 22:20:07.647911  248121 certs.go:195] generating shared ca certs ...
	I1119 22:20:07.647941  248121 certs.go:227] acquiring lock for ca certs: {Name:mkfe62d1b64cfdbe1c6a3d1f38aa0edc5b9ec419 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:07.648100  248121 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key
	I1119 22:20:07.648156  248121 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key
	I1119 22:20:07.648169  248121 certs.go:257] generating profile certs ...
	I1119 22:20:07.648233  248121 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.key
	I1119 22:20:07.648249  248121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.crt with IP's: []
	I1119 22:20:08.248835  248121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.crt ...
	I1119 22:20:08.248872  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.crt: {Name:mk71551595bc691ff029aa4f22d8136d735c86c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:08.249095  248121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.key ...
	I1119 22:20:08.249107  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.key: {Name:mk7714d393e738013c7abe0f1689bcf490e26b5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:08.249250  248121 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.key.6e1d1cff
	I1119 22:20:08.249267  248121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.crt.6e1d1cff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1119 22:20:09.018572  248121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.crt.6e1d1cff ...
	I1119 22:20:09.018603  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.crt.6e1d1cff: {Name:mk1a2db3ea3ff5c82c4c822f2131fbadbd39c724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:09.018790  248121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.key.6e1d1cff ...
	I1119 22:20:09.018808  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.key.6e1d1cff: {Name:mk13f089d71bdc7abee8608285249f8ab5ad14b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:09.018926  248121 certs.go:382] copying /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.crt.6e1d1cff -> /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.crt
	I1119 22:20:09.019033  248121 certs.go:386] copying /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.key.6e1d1cff -> /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.key
	I1119 22:20:09.019118  248121 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.key
	I1119 22:20:09.019145  248121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.crt with IP's: []
	I1119 22:20:09.141320  248121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.crt ...
	I1119 22:20:09.141353  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.crt: {Name:mke73db150d5fe88961c2b7ca7e43e6cb8c1e87f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:09.141532  248121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.key ...
	I1119 22:20:09.141550  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.key: {Name:mk65b56a4bcd9d60fdf62f046abf7a5abe0e729f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:09.141750  248121 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem (1338 bytes)
	W1119 22:20:09.141799  248121 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821_empty.pem, impossibly tiny 0 bytes
	I1119 22:20:09.141812  248121 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:20:09.141845  248121 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:20:09.141894  248121 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:20:09.141928  248121 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem (1679 bytes)
	I1119 22:20:09.141984  248121 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem (1708 bytes)
	I1119 22:20:09.142554  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:20:09.161569  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:20:09.180990  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:20:09.199264  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:20:09.217135  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 22:20:09.236364  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:20:09.255084  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:20:09.274604  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:20:09.293451  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:20:09.315834  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem --> /usr/share/ca-certificates/12821.pem (1338 bytes)
	I1119 22:20:09.336567  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem --> /usr/share/ca-certificates/128212.pem (1708 bytes)
	I1119 22:20:09.354248  248121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:20:09.367868  248121 ssh_runner.go:195] Run: openssl version
	I1119 22:20:09.374260  248121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:20:09.383332  248121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:20:09.387801  248121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:48 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:20:09.387864  248121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:20:09.424342  248121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:20:09.433605  248121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12821.pem && ln -fs /usr/share/ca-certificates/12821.pem /etc/ssl/certs/12821.pem"
	I1119 22:20:09.442478  248121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12821.pem
	I1119 22:20:09.446634  248121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12821.pem
	I1119 22:20:09.446694  248121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12821.pem
	I1119 22:20:09.481876  248121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12821.pem /etc/ssl/certs/51391683.0"
	I1119 22:20:09.491181  248121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128212.pem && ln -fs /usr/share/ca-certificates/128212.pem /etc/ssl/certs/128212.pem"
	I1119 22:20:09.499823  248121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128212.pem
	I1119 22:20:09.503986  248121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128212.pem
	I1119 22:20:09.504043  248121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128212.pem
	I1119 22:20:09.539481  248121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128212.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:20:09.548630  248121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:20:09.552649  248121 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:20:09.552709  248121 kubeadm.go:401] StartCluster: {Name:no-preload-638439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-638439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:20:09.552800  248121 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:20:09.552841  248121 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:20:09.580504  248121 cri.go:89] found id: ""
	I1119 22:20:09.580577  248121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:20:09.588825  248121 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:20:09.597263  248121 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:20:09.597312  248121 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:20:09.605431  248121 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:20:09.605448  248121 kubeadm.go:158] found existing configuration files:
	
	I1119 22:20:09.605505  248121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 22:20:09.613580  248121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:20:09.613647  248121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:20:09.621432  248121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 22:20:09.629381  248121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:20:09.629444  248121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:20:09.637498  248121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 22:20:09.645457  248121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:20:09.645500  248121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:20:09.653775  248121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 22:20:09.662581  248121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:20:09.662631  248121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 22:20:09.670267  248121 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:20:09.705969  248121 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:20:09.706049  248121 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:20:09.725461  248121 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:20:09.725557  248121 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 22:20:09.725619  248121 kubeadm.go:319] OS: Linux
	I1119 22:20:09.725688  248121 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:20:09.725759  248121 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:20:09.725823  248121 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:20:09.725926  248121 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:20:09.726011  248121 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:20:09.726090  248121 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:20:09.726169  248121 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:20:09.726247  248121 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 22:20:09.785631  248121 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:20:09.785785  248121 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:20:09.785930  248121 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:20:09.790816  248121 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:20:05.698391  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:20:08.526183  244005 addons.go:515] duration metric: took 799.151282ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1119 22:20:08.830648  244005 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-975700" context rescaled to 1 replicas
	W1119 22:20:10.330548  244005 node_ready.go:57] node "old-k8s-version-975700" has "Ready":"False" status (will retry)
	W1119 22:20:12.330688  244005 node_ready.go:57] node "old-k8s-version-975700" has "Ready":"False" status (will retry)
	I1119 22:20:09.792948  248121 out.go:252]   - Generating certificates and keys ...
	I1119 22:20:09.793051  248121 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:20:09.793149  248121 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:20:10.738826  248121 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:20:10.908170  248121 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:20:11.291841  248121 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:20:11.623960  248121 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:20:11.828384  248121 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:20:11.828565  248121 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-638439] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 22:20:12.233215  248121 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:20:12.233354  248121 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-638439] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 22:20:12.358552  248121 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:20:12.567027  248121 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:20:12.649341  248121 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:20:12.649468  248121 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:20:12.821942  248121 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:20:13.184331  248121 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:20:13.249251  248121 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:20:13.507036  248121 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:20:13.992391  248121 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:20:13.992949  248121 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:20:14.073515  248121 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:20:10.699588  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1119 22:20:10.699656  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:20:10.699719  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:20:10.736721  216336 cri.go:89] found id: "7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:10.736747  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:20:10.736753  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:10.736758  216336 cri.go:89] found id: ""
	I1119 22:20:10.736767  216336 logs.go:282] 3 containers: [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:20:10.736834  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:10.742155  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:10.747306  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:10.752281  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:20:10.752356  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:20:10.785664  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:10.785691  216336 cri.go:89] found id: ""
	I1119 22:20:10.785700  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:20:10.785758  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:10.791037  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:20:10.791107  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:20:10.827690  216336 cri.go:89] found id: ""
	I1119 22:20:10.827736  216336 logs.go:282] 0 containers: []
	W1119 22:20:10.827749  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:20:10.827781  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:20:10.827856  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:20:10.860463  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:10.860489  216336 cri.go:89] found id: ""
	I1119 22:20:10.860499  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:20:10.860557  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:10.865818  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:20:10.865902  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:20:10.896395  216336 cri.go:89] found id: ""
	I1119 22:20:10.896425  216336 logs.go:282] 0 containers: []
	W1119 22:20:10.896457  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:20:10.896464  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:20:10.896524  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:20:10.927065  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:10.927091  216336 cri.go:89] found id: ""
	I1119 22:20:10.927100  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:20:10.927157  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:10.931718  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:20:10.931789  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:20:10.960849  216336 cri.go:89] found id: ""
	I1119 22:20:10.960892  216336 logs.go:282] 0 containers: []
	W1119 22:20:10.960903  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:20:10.960910  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:20:10.960962  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:20:10.993029  216336 cri.go:89] found id: ""
	I1119 22:20:10.993057  216336 logs.go:282] 0 containers: []
	W1119 22:20:10.993067  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:20:10.993080  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:20:10.993094  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:20:11.027974  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:20:11.028010  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:11.062086  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:20:11.062120  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:11.103210  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:20:11.103250  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:11.145837  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:20:11.145872  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:20:11.199841  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:20:11.199937  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:20:11.236586  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:20:11.236618  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:20:11.253432  216336 logs.go:123] Gathering logs for kube-apiserver [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7] ...
	I1119 22:20:11.253487  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:11.295903  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:20:11.295943  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:11.337708  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:20:11.337745  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:20:11.452249  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:20:11.452285  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:20:14.830008  244005 node_ready.go:57] node "old-k8s-version-975700" has "Ready":"False" status (will retry)
	W1119 22:20:16.830268  244005 node_ready.go:57] node "old-k8s-version-975700" has "Ready":"False" status (will retry)
	I1119 22:20:14.075591  248121 out.go:252]   - Booting up control plane ...
	I1119 22:20:14.075701  248121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:20:14.075795  248121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:20:14.076511  248121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:20:14.092600  248121 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:20:14.092767  248121 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:20:14.099651  248121 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:20:14.099786  248121 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:20:14.099865  248121 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:20:14.205620  248121 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:20:14.205784  248121 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:20:14.707136  248121 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.67843ms
	I1119 22:20:14.711176  248121 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:20:14.711406  248121 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1119 22:20:14.711556  248121 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:20:14.711669  248121 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:20:16.370429  248121 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.659105526s
	I1119 22:20:16.919263  248121 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.208262146s
	I1119 22:20:18.712413  248121 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001122323s
	I1119 22:20:18.724319  248121 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:20:18.734195  248121 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:20:18.743489  248121 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:20:18.743707  248121 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-638439 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:20:18.749843  248121 kubeadm.go:319] [bootstrap-token] Using token: tkvbyg.4blpqvlc8c0koqab
	I1119 22:20:18.751541  248121 out.go:252]   - Configuring RBAC rules ...
	I1119 22:20:18.751647  248121 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:20:18.754347  248121 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:20:18.760461  248121 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:20:18.763019  248121 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:20:18.765434  248121 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:20:18.768021  248121 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:20:19.119568  248121 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:20:19.537037  248121 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:20:20.119469  248121 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:20:20.120399  248121 kubeadm.go:319] 
	I1119 22:20:20.120467  248121 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:20:20.120472  248121 kubeadm.go:319] 
	I1119 22:20:20.120605  248121 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:20:20.120632  248121 kubeadm.go:319] 
	I1119 22:20:20.120661  248121 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:20:20.120719  248121 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:20:20.120765  248121 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:20:20.120772  248121 kubeadm.go:319] 
	I1119 22:20:20.120845  248121 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:20:20.120857  248121 kubeadm.go:319] 
	I1119 22:20:20.121004  248121 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:20:20.121029  248121 kubeadm.go:319] 
	I1119 22:20:20.121103  248121 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:20:20.121207  248121 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:20:20.121271  248121 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:20:20.121297  248121 kubeadm.go:319] 
	I1119 22:20:20.121444  248121 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:20:20.121523  248121 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:20:20.121533  248121 kubeadm.go:319] 
	I1119 22:20:20.121611  248121 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tkvbyg.4blpqvlc8c0koqab \
	I1119 22:20:20.121712  248121 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6123875ff628fb9eedbd72f2253477865aa197083b84a1d60cb6c00de308bc63 \
	I1119 22:20:20.121734  248121 kubeadm.go:319] 	--control-plane 
	I1119 22:20:20.121738  248121 kubeadm.go:319] 
	I1119 22:20:20.121810  248121 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:20:20.121816  248121 kubeadm.go:319] 
	I1119 22:20:20.121927  248121 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tkvbyg.4blpqvlc8c0koqab \
	I1119 22:20:20.122034  248121 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6123875ff628fb9eedbd72f2253477865aa197083b84a1d60cb6c00de308bc63 
	I1119 22:20:20.124555  248121 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 22:20:20.124740  248121 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:20:20.124773  248121 cni.go:84] Creating CNI manager for ""
	I1119 22:20:20.124786  248121 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:20:20.127350  248121 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1119 22:20:19.330624  244005 node_ready.go:57] node "old-k8s-version-975700" has "Ready":"False" status (will retry)
	W1119 22:20:21.830427  244005 node_ready.go:57] node "old-k8s-version-975700" has "Ready":"False" status (will retry)
	I1119 22:20:22.330516  244005 node_ready.go:49] node "old-k8s-version-975700" is "Ready"
	I1119 22:20:22.330545  244005 node_ready.go:38] duration metric: took 14.003533581s for node "old-k8s-version-975700" to be "Ready" ...
	I1119 22:20:22.330557  244005 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:20:22.330607  244005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:20:22.343206  244005 api_server.go:72] duration metric: took 14.6162161s to wait for apiserver process to appear ...
	I1119 22:20:22.343236  244005 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:20:22.343259  244005 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:20:22.347053  244005 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1119 22:20:22.348151  244005 api_server.go:141] control plane version: v1.28.0
	I1119 22:20:22.348175  244005 api_server.go:131] duration metric: took 4.933094ms to wait for apiserver health ...
	I1119 22:20:22.348183  244005 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:20:22.351821  244005 system_pods.go:59] 8 kube-system pods found
	I1119 22:20:22.351849  244005 system_pods.go:61] "coredns-5dd5756b68-8hdh7" [a4057bf2-fe2e-42db-83e9-bc625724c61c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:20:22.351854  244005 system_pods.go:61] "etcd-old-k8s-version-975700" [12a76858-b7be-4963-8323-fe57ca12a08d] Running
	I1119 22:20:22.351860  244005 system_pods.go:61] "kindnet-mlzfc" [e2532f4d-a32b-45a0-b846-1d2ecea1f926] Running
	I1119 22:20:22.351864  244005 system_pods.go:61] "kube-apiserver-old-k8s-version-975700" [28d03966-c950-4e9c-bbd5-4aeb08bb3363] Running
	I1119 22:20:22.351869  244005 system_pods.go:61] "kube-controller-manager-old-k8s-version-975700" [b2f2d323-34b1-47a7-945e-73086e2e6887] Running
	I1119 22:20:22.351873  244005 system_pods.go:61] "kube-proxy-rnxxf" [f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d] Running
	I1119 22:20:22.351877  244005 system_pods.go:61] "kube-scheduler-old-k8s-version-975700" [65c95750-3a2f-4847-a93d-4e54bc709449] Running
	I1119 22:20:22.351892  244005 system_pods.go:61] "storage-provisioner" [6c937194-8889-47a0-b05f-7af799e18044] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:20:22.351898  244005 system_pods.go:74] duration metric: took 3.709193ms to wait for pod list to return data ...
	I1119 22:20:22.351906  244005 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:20:22.353863  244005 default_sa.go:45] found service account: "default"
	I1119 22:20:22.353906  244005 default_sa.go:55] duration metric: took 1.968518ms for default service account to be created ...
	I1119 22:20:22.353917  244005 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:20:22.356763  244005 system_pods.go:86] 8 kube-system pods found
	I1119 22:20:22.356787  244005 system_pods.go:89] "coredns-5dd5756b68-8hdh7" [a4057bf2-fe2e-42db-83e9-bc625724c61c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:20:22.356792  244005 system_pods.go:89] "etcd-old-k8s-version-975700" [12a76858-b7be-4963-8323-fe57ca12a08d] Running
	I1119 22:20:22.356799  244005 system_pods.go:89] "kindnet-mlzfc" [e2532f4d-a32b-45a0-b846-1d2ecea1f926] Running
	I1119 22:20:22.356803  244005 system_pods.go:89] "kube-apiserver-old-k8s-version-975700" [28d03966-c950-4e9c-bbd5-4aeb08bb3363] Running
	I1119 22:20:22.356810  244005 system_pods.go:89] "kube-controller-manager-old-k8s-version-975700" [b2f2d323-34b1-47a7-945e-73086e2e6887] Running
	I1119 22:20:22.356813  244005 system_pods.go:89] "kube-proxy-rnxxf" [f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d] Running
	I1119 22:20:22.356817  244005 system_pods.go:89] "kube-scheduler-old-k8s-version-975700" [65c95750-3a2f-4847-a93d-4e54bc709449] Running
	I1119 22:20:22.356822  244005 system_pods.go:89] "storage-provisioner" [6c937194-8889-47a0-b05f-7af799e18044] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:20:22.356838  244005 retry.go:31] will retry after 295.130955ms: missing components: kube-dns
	I1119 22:20:20.128552  248121 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:20:20.133893  248121 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:20:20.133928  248121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:20:20.148247  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:20:20.366418  248121 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:20:20.366472  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:20.366530  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-638439 minikube.k8s.io/updated_at=2025_11_19T22_20_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=no-preload-638439 minikube.k8s.io/primary=true
	I1119 22:20:20.472760  248121 ops.go:34] apiserver oom_adj: -16
	I1119 22:20:20.472956  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:20.973815  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:21.473583  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:21.973622  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:22.473704  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:22.973336  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:23.473849  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:23.973455  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:24.472997  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:24.537110  248121 kubeadm.go:1114] duration metric: took 4.170685845s to wait for elevateKubeSystemPrivileges
	I1119 22:20:24.537150  248121 kubeadm.go:403] duration metric: took 14.984446293s to StartCluster
	I1119 22:20:24.537173  248121 settings.go:142] acquiring lock: {Name:mk3c795849984e82ee99295088dd85252bd75f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:24.537243  248121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:20:24.539105  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/kubeconfig: {Name:mk5b9093863cb8ca8629eea9fd861356875781d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:24.539319  248121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:20:24.539342  248121 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:20:24.539397  248121 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:20:24.539519  248121 addons.go:70] Setting storage-provisioner=true in profile "no-preload-638439"
	I1119 22:20:24.539532  248121 config.go:182] Loaded profile config "no-preload-638439": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:20:24.539540  248121 addons.go:239] Setting addon storage-provisioner=true in "no-preload-638439"
	I1119 22:20:24.539552  248121 addons.go:70] Setting default-storageclass=true in profile "no-preload-638439"
	I1119 22:20:24.539577  248121 host.go:66] Checking if "no-preload-638439" exists ...
	I1119 22:20:24.539588  248121 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-638439"
	I1119 22:20:24.539936  248121 cli_runner.go:164] Run: docker container inspect no-preload-638439 --format={{.State.Status}}
	I1119 22:20:24.540134  248121 cli_runner.go:164] Run: docker container inspect no-preload-638439 --format={{.State.Status}}
	I1119 22:20:24.541288  248121 out.go:179] * Verifying Kubernetes components...
	I1119 22:20:24.543039  248121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:20:24.564207  248121 addons.go:239] Setting addon default-storageclass=true in "no-preload-638439"
	I1119 22:20:24.564253  248121 host.go:66] Checking if "no-preload-638439" exists ...
	I1119 22:20:24.564597  248121 cli_runner.go:164] Run: docker container inspect no-preload-638439 --format={{.State.Status}}
	I1119 22:20:24.564680  248121 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:20:24.568527  248121 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:20:24.568546  248121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:20:24.568596  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:20:24.597385  248121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa Username:docker}
	I1119 22:20:24.599498  248121 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:20:24.599523  248121 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:20:24.599582  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:20:24.624046  248121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa Username:docker}
	I1119 22:20:24.628608  248121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:20:24.684697  248121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:20:24.711970  248121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:20:24.742786  248121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:20:24.836401  248121 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1119 22:20:24.837864  248121 node_ready.go:35] waiting up to 6m0s for node "no-preload-638439" to be "Ready" ...
	I1119 22:20:25.026785  248121 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 22:20:21.527976  216336 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.075664087s)
	W1119 22:20:21.528025  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1119 22:20:24.028516  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:20:22.657454  244005 system_pods.go:86] 8 kube-system pods found
	I1119 22:20:22.657490  244005 system_pods.go:89] "coredns-5dd5756b68-8hdh7" [a4057bf2-fe2e-42db-83e9-bc625724c61c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:20:22.657499  244005 system_pods.go:89] "etcd-old-k8s-version-975700" [12a76858-b7be-4963-8323-fe57ca12a08d] Running
	I1119 22:20:22.657508  244005 system_pods.go:89] "kindnet-mlzfc" [e2532f4d-a32b-45a0-b846-1d2ecea1f926] Running
	I1119 22:20:22.657513  244005 system_pods.go:89] "kube-apiserver-old-k8s-version-975700" [28d03966-c950-4e9c-bbd5-4aeb08bb3363] Running
	I1119 22:20:22.657520  244005 system_pods.go:89] "kube-controller-manager-old-k8s-version-975700" [b2f2d323-34b1-47a7-945e-73086e2e6887] Running
	I1119 22:20:22.657526  244005 system_pods.go:89] "kube-proxy-rnxxf" [f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d] Running
	I1119 22:20:22.657534  244005 system_pods.go:89] "kube-scheduler-old-k8s-version-975700" [65c95750-3a2f-4847-a93d-4e54bc709449] Running
	I1119 22:20:22.657541  244005 system_pods.go:89] "storage-provisioner" [6c937194-8889-47a0-b05f-7af799e18044] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:20:22.657562  244005 retry.go:31] will retry after 290.603952ms: missing components: kube-dns
	I1119 22:20:22.951933  244005 system_pods.go:86] 8 kube-system pods found
	I1119 22:20:22.951963  244005 system_pods.go:89] "coredns-5dd5756b68-8hdh7" [a4057bf2-fe2e-42db-83e9-bc625724c61c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:20:22.951969  244005 system_pods.go:89] "etcd-old-k8s-version-975700" [12a76858-b7be-4963-8323-fe57ca12a08d] Running
	I1119 22:20:22.951974  244005 system_pods.go:89] "kindnet-mlzfc" [e2532f4d-a32b-45a0-b846-1d2ecea1f926] Running
	I1119 22:20:22.951978  244005 system_pods.go:89] "kube-apiserver-old-k8s-version-975700" [28d03966-c950-4e9c-bbd5-4aeb08bb3363] Running
	I1119 22:20:22.951983  244005 system_pods.go:89] "kube-controller-manager-old-k8s-version-975700" [b2f2d323-34b1-47a7-945e-73086e2e6887] Running
	I1119 22:20:22.951988  244005 system_pods.go:89] "kube-proxy-rnxxf" [f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d] Running
	I1119 22:20:22.951992  244005 system_pods.go:89] "kube-scheduler-old-k8s-version-975700" [65c95750-3a2f-4847-a93d-4e54bc709449] Running
	I1119 22:20:22.951996  244005 system_pods.go:89] "storage-provisioner" [6c937194-8889-47a0-b05f-7af799e18044] Running
	I1119 22:20:22.952009  244005 retry.go:31] will retry after 460.674944ms: missing components: kube-dns
	I1119 22:20:23.417271  244005 system_pods.go:86] 8 kube-system pods found
	I1119 22:20:23.417302  244005 system_pods.go:89] "coredns-5dd5756b68-8hdh7" [a4057bf2-fe2e-42db-83e9-bc625724c61c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:20:23.417309  244005 system_pods.go:89] "etcd-old-k8s-version-975700" [12a76858-b7be-4963-8323-fe57ca12a08d] Running
	I1119 22:20:23.417314  244005 system_pods.go:89] "kindnet-mlzfc" [e2532f4d-a32b-45a0-b846-1d2ecea1f926] Running
	I1119 22:20:23.417320  244005 system_pods.go:89] "kube-apiserver-old-k8s-version-975700" [28d03966-c950-4e9c-bbd5-4aeb08bb3363] Running
	I1119 22:20:23.417326  244005 system_pods.go:89] "kube-controller-manager-old-k8s-version-975700" [b2f2d323-34b1-47a7-945e-73086e2e6887] Running
	I1119 22:20:23.417331  244005 system_pods.go:89] "kube-proxy-rnxxf" [f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d] Running
	I1119 22:20:23.417336  244005 system_pods.go:89] "kube-scheduler-old-k8s-version-975700" [65c95750-3a2f-4847-a93d-4e54bc709449] Running
	I1119 22:20:23.417341  244005 system_pods.go:89] "storage-provisioner" [6c937194-8889-47a0-b05f-7af799e18044] Running
	I1119 22:20:23.417365  244005 retry.go:31] will retry after 513.116078ms: missing components: kube-dns
	I1119 22:20:23.935257  244005 system_pods.go:86] 8 kube-system pods found
	I1119 22:20:23.935284  244005 system_pods.go:89] "coredns-5dd5756b68-8hdh7" [a4057bf2-fe2e-42db-83e9-bc625724c61c] Running
	I1119 22:20:23.935290  244005 system_pods.go:89] "etcd-old-k8s-version-975700" [12a76858-b7be-4963-8323-fe57ca12a08d] Running
	I1119 22:20:23.935294  244005 system_pods.go:89] "kindnet-mlzfc" [e2532f4d-a32b-45a0-b846-1d2ecea1f926] Running
	I1119 22:20:23.935297  244005 system_pods.go:89] "kube-apiserver-old-k8s-version-975700" [28d03966-c950-4e9c-bbd5-4aeb08bb3363] Running
	I1119 22:20:23.935301  244005 system_pods.go:89] "kube-controller-manager-old-k8s-version-975700" [b2f2d323-34b1-47a7-945e-73086e2e6887] Running
	I1119 22:20:23.935304  244005 system_pods.go:89] "kube-proxy-rnxxf" [f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d] Running
	I1119 22:20:23.935308  244005 system_pods.go:89] "kube-scheduler-old-k8s-version-975700" [65c95750-3a2f-4847-a93d-4e54bc709449] Running
	I1119 22:20:23.935311  244005 system_pods.go:89] "storage-provisioner" [6c937194-8889-47a0-b05f-7af799e18044] Running
	I1119 22:20:23.935318  244005 system_pods.go:126] duration metric: took 1.581396028s to wait for k8s-apps to be running ...
	I1119 22:20:23.935324  244005 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:20:23.935362  244005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:20:23.948529  244005 system_svc.go:56] duration metric: took 13.192475ms WaitForService to wait for kubelet
	I1119 22:20:23.948562  244005 kubeadm.go:587] duration metric: took 16.221575338s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:20:23.948584  244005 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:20:23.951344  244005 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 22:20:23.951368  244005 node_conditions.go:123] node cpu capacity is 8
	I1119 22:20:23.951381  244005 node_conditions.go:105] duration metric: took 2.792615ms to run NodePressure ...
	I1119 22:20:23.951394  244005 start.go:242] waiting for startup goroutines ...
	I1119 22:20:23.951400  244005 start.go:247] waiting for cluster config update ...
	I1119 22:20:23.951411  244005 start.go:256] writing updated cluster config ...
	I1119 22:20:23.951671  244005 ssh_runner.go:195] Run: rm -f paused
	I1119 22:20:23.955724  244005 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:20:23.960403  244005 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-8hdh7" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:23.964724  244005 pod_ready.go:94] pod "coredns-5dd5756b68-8hdh7" is "Ready"
	I1119 22:20:23.964745  244005 pod_ready.go:86] duration metric: took 4.323941ms for pod "coredns-5dd5756b68-8hdh7" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:23.969212  244005 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:23.977143  244005 pod_ready.go:94] pod "etcd-old-k8s-version-975700" is "Ready"
	I1119 22:20:23.977172  244005 pod_ready.go:86] duration metric: took 7.932702ms for pod "etcd-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:23.984279  244005 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:23.990403  244005 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-975700" is "Ready"
	I1119 22:20:23.990436  244005 pod_ready.go:86] duration metric: took 6.116437ms for pod "kube-apiserver-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:23.994759  244005 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:24.360199  244005 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-975700" is "Ready"
	I1119 22:20:24.360227  244005 pod_ready.go:86] duration metric: took 365.436099ms for pod "kube-controller-manager-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:24.562023  244005 pod_ready.go:83] waiting for pod "kube-proxy-rnxxf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:24.960397  244005 pod_ready.go:94] pod "kube-proxy-rnxxf" is "Ready"
	I1119 22:20:24.960428  244005 pod_ready.go:86] duration metric: took 398.37739ms for pod "kube-proxy-rnxxf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:25.161533  244005 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:25.560960  244005 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-975700" is "Ready"
	I1119 22:20:25.560992  244005 pod_ready.go:86] duration metric: took 399.43384ms for pod "kube-scheduler-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:25.561003  244005 pod_ready.go:40] duration metric: took 1.605243985s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:20:25.605359  244005 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1119 22:20:25.607589  244005 out.go:203] 
	W1119 22:20:25.608986  244005 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 22:20:25.610519  244005 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 22:20:25.612224  244005 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-975700" cluster and "default" namespace by default
	I1119 22:20:25.028260  248121 addons.go:515] duration metric: took 488.871855ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 22:20:25.340186  248121 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-638439" context rescaled to 1 replicas
	W1119 22:20:26.840695  248121 node_ready.go:57] node "no-preload-638439" has "Ready":"False" status (will retry)
	W1119 22:20:28.841182  248121 node_ready.go:57] node "no-preload-638439" has "Ready":"False" status (will retry)
	I1119 22:20:26.041396  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:42420->192.168.76.2:8443: read: connection reset by peer
	I1119 22:20:26.041468  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:20:26.041590  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:20:26.074121  216336 cri.go:89] found id: "7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:26.074147  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:20:26.074156  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:26.074161  216336 cri.go:89] found id: ""
	I1119 22:20:26.074169  216336 logs.go:282] 3 containers: [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:20:26.074227  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:26.080252  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:26.086170  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:26.090514  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:20:26.090588  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:20:26.119338  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:26.119365  216336 cri.go:89] found id: ""
	I1119 22:20:26.119375  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:20:26.119431  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:26.123237  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:20:26.123308  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:20:26.150429  216336 cri.go:89] found id: ""
	I1119 22:20:26.150465  216336 logs.go:282] 0 containers: []
	W1119 22:20:26.150475  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:20:26.150488  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:20:26.150553  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:20:26.180127  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:26.180150  216336 cri.go:89] found id: ""
	I1119 22:20:26.180167  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:20:26.180222  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:26.185074  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:20:26.185141  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:20:26.216334  216336 cri.go:89] found id: ""
	I1119 22:20:26.216362  216336 logs.go:282] 0 containers: []
	W1119 22:20:26.216373  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:20:26.216381  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:20:26.216440  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:20:26.246928  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:26.246952  216336 cri.go:89] found id: ""
	I1119 22:20:26.246962  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:20:26.247027  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:26.252210  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:20:26.252281  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:20:26.283008  216336 cri.go:89] found id: ""
	I1119 22:20:26.283052  216336 logs.go:282] 0 containers: []
	W1119 22:20:26.283086  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:20:26.283101  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:20:26.283160  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:20:26.311983  216336 cri.go:89] found id: ""
	I1119 22:20:26.312016  216336 logs.go:282] 0 containers: []
	W1119 22:20:26.312026  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:20:26.312040  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:20:26.312059  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:20:26.372080  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:20:26.372108  216336 logs.go:123] Gathering logs for kube-apiserver [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7] ...
	I1119 22:20:26.372123  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:26.410125  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:20:26.410156  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:20:26.445052  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:20:26.445081  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:20:26.488314  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:20:26.488348  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:20:26.519759  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:20:26.519786  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:20:26.607720  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:20:26.607753  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:20:26.622164  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:20:26.622196  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:26.658569  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:20:26.658598  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:26.690380  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:20:26.690410  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:26.723334  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:20:26.723368  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:29.254435  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:20:29.254927  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:20:29.254988  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:20:29.255050  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:20:29.281477  216336 cri.go:89] found id: "7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:29.281503  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:29.281509  216336 cri.go:89] found id: ""
	I1119 22:20:29.281518  216336 logs.go:282] 2 containers: [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:20:29.281576  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:29.285991  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:29.289786  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:20:29.289841  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:20:29.315177  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:29.315199  216336 cri.go:89] found id: ""
	I1119 22:20:29.315208  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:20:29.315264  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:29.319376  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:20:29.319444  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:20:29.346951  216336 cri.go:89] found id: ""
	I1119 22:20:29.346973  216336 logs.go:282] 0 containers: []
	W1119 22:20:29.346980  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:20:29.346998  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:20:29.347043  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:20:29.374529  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:29.374549  216336 cri.go:89] found id: ""
	I1119 22:20:29.374556  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:20:29.374608  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:29.378833  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:20:29.378918  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:20:29.409418  216336 cri.go:89] found id: ""
	I1119 22:20:29.409456  216336 logs.go:282] 0 containers: []
	W1119 22:20:29.409468  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:20:29.409476  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:20:29.409542  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:20:29.439747  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:29.439767  216336 cri.go:89] found id: ""
	I1119 22:20:29.439775  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:20:29.439832  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:29.443967  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:20:29.444041  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:20:29.469669  216336 cri.go:89] found id: ""
	I1119 22:20:29.469695  216336 logs.go:282] 0 containers: []
	W1119 22:20:29.469705  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:20:29.469712  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:20:29.469769  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:20:29.496972  216336 cri.go:89] found id: ""
	I1119 22:20:29.497000  216336 logs.go:282] 0 containers: []
	W1119 22:20:29.497009  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:20:29.497026  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:20:29.497039  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:20:29.585833  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:20:29.585865  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:20:29.600450  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:20:29.600488  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:29.634599  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:20:29.634632  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:20:29.694751  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:20:29.694785  216336 logs.go:123] Gathering logs for kube-apiserver [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7] ...
	I1119 22:20:29.694799  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:29.728982  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:20:29.729009  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:29.762543  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:20:29.762572  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:29.794342  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:20:29.794374  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:29.828582  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:20:29.828610  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:20:29.874642  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:20:29.874672  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1119 22:20:31.341227  248121 node_ready.go:57] node "no-preload-638439" has "Ready":"False" status (will retry)
	W1119 22:20:33.840869  248121 node_ready.go:57] node "no-preload-638439" has "Ready":"False" status (will retry)
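The healthz probes in the log above (api_server.go: "Checking apiserver healthz at https://192.168.76.2:8443/healthz ...") are plain HTTPS GETs against the apiserver. A minimal manual equivalent, run from the same host, might look like the sketch below; the -k flag stands in for the cluster CA that minikube normally supplies, and the invocation is not part of the test output:

	# Probe the same healthz endpoint the log polls; prints "ok" when the apiserver
	# is healthy, or fails with the connection refused/reset errors seen above.
	curl -sk https://192.168.76.2:8443/healthz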
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	d5768828ca04f       56cc512116c8f       7 seconds ago       Running             busybox                   0                   36bf64ba3c00d       busybox                                          default
	dcb27a5492378       ead0a4a53df89       13 seconds ago      Running             coredns                   0                   6a75c4192812f       coredns-5dd5756b68-8hdh7                         kube-system
	537c778c87f9d       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   8fa22b8d20a3f       storage-provisioner                              kube-system
	9f637c51ffa43       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   cb55d544de2ea       kindnet-mlzfc                                    kube-system
	bfde9418adc9d       ea1030da44aa1       27 seconds ago      Running             kube-proxy                0                   4ca7d14c5d50a       kube-proxy-rnxxf                                 kube-system
	814e6989c6431       f6f496300a2ae       46 seconds ago      Running             kube-scheduler            0                   f5ceb3a12bb84       kube-scheduler-old-k8s-version-975700            kube-system
	1870cf3b3c44b       bb5e0dde9054c       46 seconds ago      Running             kube-apiserver            0                   52831c15e2557       kube-apiserver-old-k8s-version-975700            kube-system
	97883579e01ac       73deb9a3f7025       46 seconds ago      Running             etcd                      0                   e63e84e034d31       etcd-old-k8s-version-975700                      kube-system
	f4532683638eb       4be79c38a4bab       46 seconds ago      Running             kube-controller-manager   0                   250cc7adfeba7       kube-controller-manager-old-k8s-version-975700   kube-system
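The container status table above is the snapshot the harness gathers with the crictl command visible earlier in the log (sudo crictl ps -a, falling back to docker ps -a). A comparable snapshot can be taken by hand against the same profile; the invocation below is only a sketch and is not part of the test run:

	# List all CRI containers on the old-k8s-version-975700 node, mirroring the
	# harness command, with the same docker fallback if crictl is unavailable.
	minikube -p old-k8s-version-975700 ssh 'sudo crictl ps -a || sudo docker ps -a'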
	
	
	==> containerd <==
	Nov 19 22:20:22 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:22.712366614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-8hdh7,Uid:a4057bf2-fe2e-42db-83e9-bc625724c61c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a75c4192812faee0e855fcba490a6d63eeaa3e8229ace4b9a3a2b128e801116\""
	Nov 19 22:20:22 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:22.715553681Z" level=info msg="CreateContainer within sandbox \"6a75c4192812faee0e855fcba490a6d63eeaa3e8229ace4b9a3a2b128e801116\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 19 22:20:22 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:22.722344581Z" level=info msg="Container dcb27a5492378c9249ef7c6af871ff41c7849ef2087b13036c4112f3826f90bf: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:20:22 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:22.728923728Z" level=info msg="CreateContainer within sandbox \"6a75c4192812faee0e855fcba490a6d63eeaa3e8229ace4b9a3a2b128e801116\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dcb27a5492378c9249ef7c6af871ff41c7849ef2087b13036c4112f3826f90bf\""
	Nov 19 22:20:22 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:22.729475146Z" level=info msg="StartContainer for \"dcb27a5492378c9249ef7c6af871ff41c7849ef2087b13036c4112f3826f90bf\""
	Nov 19 22:20:22 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:22.730499329Z" level=info msg="connecting to shim dcb27a5492378c9249ef7c6af871ff41c7849ef2087b13036c4112f3826f90bf" address="unix:///run/containerd/s/34a674b328f7f600d36cfd77d784cd14517a5b33bcc634daaca7b6dd09032aa9" protocol=ttrpc version=3
	Nov 19 22:20:22 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:22.757547812Z" level=info msg="StartContainer for \"537c778c87f9d8c20894001938b5632c0e5dcc6b1095fb4d266fd4b3995811b2\" returns successfully"
	Nov 19 22:20:22 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:22.786711759Z" level=info msg="StartContainer for \"dcb27a5492378c9249ef7c6af871ff41c7849ef2087b13036c4112f3826f90bf\" returns successfully"
	Nov 19 22:20:26 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:26.134603361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:b49caea0-80e8-4473-ac1f-f9bd327c3754,Namespace:default,Attempt:0,}"
	Nov 19 22:20:26 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:26.185916874Z" level=info msg="connecting to shim 36bf64ba3c00d9e0c7f71f899e9cd21577248641d207dcfc98340d1d6b3cb0d0" address="unix:///run/containerd/s/c0d7613134ce7e47335ad17357d4a66a2ab52af6386e2abf7c0d2ac536b7f638" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 22:20:26 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:26.262497493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:b49caea0-80e8-4473-ac1f-f9bd327c3754,Namespace:default,Attempt:0,} returns sandbox id \"36bf64ba3c00d9e0c7f71f899e9cd21577248641d207dcfc98340d1d6b3cb0d0\""
	Nov 19 22:20:26 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:26.264162086Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.373146514Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.374074587Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396641"
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.375650212Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.378263887Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.378735365Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.114534001s"
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.378776793Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.380562536Z" level=info msg="CreateContainer within sandbox \"36bf64ba3c00d9e0c7f71f899e9cd21577248641d207dcfc98340d1d6b3cb0d0\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.389100774Z" level=info msg="Container d5768828ca04f9295bf18e3fc30308deb6547c5a50a2782f1e71634c15ae7e9a: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.397616150Z" level=info msg="CreateContainer within sandbox \"36bf64ba3c00d9e0c7f71f899e9cd21577248641d207dcfc98340d1d6b3cb0d0\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"d5768828ca04f9295bf18e3fc30308deb6547c5a50a2782f1e71634c15ae7e9a\""
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.398260870Z" level=info msg="StartContainer for \"d5768828ca04f9295bf18e3fc30308deb6547c5a50a2782f1e71634c15ae7e9a\""
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.399512803Z" level=info msg="connecting to shim d5768828ca04f9295bf18e3fc30308deb6547c5a50a2782f1e71634c15ae7e9a" address="unix:///run/containerd/s/c0d7613134ce7e47335ad17357d4a66a2ab52af6386e2abf7c0d2ac536b7f638" protocol=ttrpc version=3
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.458456492Z" level=info msg="StartContainer for \"d5768828ca04f9295bf18e3fc30308deb6547c5a50a2782f1e71634c15ae7e9a\" returns successfully"
	Nov 19 22:20:34 old-k8s-version-975700 containerd[666]: E1119 22:20:34.905114     666 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [dcb27a5492378c9249ef7c6af871ff41c7849ef2087b13036c4112f3826f90bf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48436 - 61 "HINFO IN 2387730691433537035.6546186387081931462. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.161284203s
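The CoreDNS pod above serves the Corefile that minikube rewrites at startup: the kubectl replace pipeline logged earlier inserts a hosts block immediately before the "forward . /etc/resolv.conf" line and a "log" directive before "errors", so host.minikube.internal resolves to the host gateway. Reconstructed from that sed expression (the 192.168.103.1 address comes from the no-preload log and differs per profile), the injected fragment looks roughly like:

	        log
	        errors
	        ...
	        hosts {
	           192.168.103.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf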
	
	
	==> describe nodes <==
	Name:               old-k8s-version-975700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-975700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=old-k8s-version-975700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_19_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:19:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-975700
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:20:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:20:25 +0000   Wed, 19 Nov 2025 22:19:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:20:25 +0000   Wed, 19 Nov 2025 22:19:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:20:25 +0000   Wed, 19 Nov 2025 22:19:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:20:25 +0000   Wed, 19 Nov 2025 22:20:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-975700
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                3fcee5dd-d370-4209-8cfb-b52e4110e73b
	  Boot ID:                    f21fb8e8-9754-4dc5-a8d9-ce41ba5f6057
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-8hdh7                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-old-k8s-version-975700                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         42s
	  kube-system                 kindnet-mlzfc                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-975700             250m (3%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-controller-manager-old-k8s-version-975700    200m (2%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-rnxxf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-975700             100m (1%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 42s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  42s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  42s   kubelet          Node old-k8s-version-975700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s   kubelet          Node old-k8s-version-975700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s   kubelet          Node old-k8s-version-975700 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node old-k8s-version-975700 event: Registered Node old-k8s-version-975700 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-975700 status is now: NodeReady
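The Conditions table above is what the node_ready wait in the log keys on (it retries while a node still reports "Ready":"False"). A one-liner to read just that condition for this node; a sketch only, assuming the kubeconfig context carries the profile name as it does elsewhere in the log:

	# Print the node's Ready condition status; "True" matches the KubeletReady
	# condition shown in the table above.
	kubectl --context old-k8s-version-975700 get node old-k8s-version-975700 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'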
	
	
	==> dmesg <==
	[Nov19 21:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001836] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.089012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.424964] i8042: Warning: Keylock active
	[  +0.011946] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499038] block sda: the capability attribute has been deprecated.
	[  +0.090446] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026259] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.862736] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [97883579e01acd8bc2695b07f55c948f3a46c160bf534f88de73606eaba10069] <==
	{"level":"info","ts":"2025-11-19T22:19:49.465492Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-19T22:19:49.465528Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-19T22:19:50.345522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-19T22:19:50.345562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-19T22:19:50.345577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-11-19T22:19:50.345588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-11-19T22:19:50.345593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-19T22:19:50.345601Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-11-19T22:19:50.345607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-19T22:19:50.346237Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:19:50.346786Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T22:19:50.346778Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-975700 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-19T22:19:50.346819Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T22:19:50.34703Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:19:50.347114Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-19T22:19:50.347198Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-19T22:19:50.347172Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:19:50.347229Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:19:50.34807Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-19T22:19:50.348559Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"warn","ts":"2025-11-19T22:19:52.006287Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.664484ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/apiserver-pcqkfx5qiyeeley4bpw5zibjhu\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-19T22:19:52.0064Z","caller":"traceutil/trace.go:171","msg":"trace[898828708] range","detail":"{range_begin:/registry/leases/kube-system/apiserver-pcqkfx5qiyeeley4bpw5zibjhu; range_end:; response_count:0; response_revision:69; }","duration":"208.799616ms","start":"2025-11-19T22:19:51.797579Z","end":"2025-11-19T22:19:52.006378Z","steps":["trace[898828708] 'range keys from in-memory index tree'  (duration: 208.571934ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:20:07.925909Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.040627ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2025-11-19T22:20:07.925985Z","caller":"traceutil/trace.go:171","msg":"trace[1355111703] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:350; }","duration":"124.145953ms","start":"2025-11-19T22:20:07.801823Z","end":"2025-11-19T22:20:07.925969Z","steps":["trace[1355111703] 'range keys from in-memory index tree'  (duration: 123.893977ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:20:07.945114Z","caller":"traceutil/trace.go:171","msg":"trace[986567943] transaction","detail":"{read_only:false; response_revision:351; number_of_response:1; }","duration":"142.590181ms","start":"2025-11-19T22:20:07.802499Z","end":"2025-11-19T22:20:07.945089Z","steps":["trace[986567943] 'process raft request'  (duration: 142.419431ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:20:36 up  1:02,  0 user,  load average: 4.39, 3.37, 2.10
	Linux old-k8s-version-975700 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9f637c51ffa434a826f6584d8a7faf4701e1f09be3a0f36a1d28e02a37c6fb8d] <==
	I1119 22:20:11.957590       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:20:11.957822       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1119 22:20:11.958041       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:20:11.958058       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:20:11.958074       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:20:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:20:12.159373       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:20:12.159514       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:20:12.159531       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:20:12.159716       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:20:12.538063       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:20:12.538126       1 metrics.go:72] Registering metrics
	I1119 22:20:12.538374       1 controller.go:711] "Syncing nftables rules"
	I1119 22:20:22.164952       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1119 22:20:22.165012       1 main.go:301] handling current node
	I1119 22:20:32.161088       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1119 22:20:32.161124       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1870cf3b3c44ba81df1590d986f8a70efb48ac5a464f0a3d4d757b18fc420709] <==
	I1119 22:19:51.591405       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1119 22:19:51.591414       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1119 22:19:51.591407       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:19:51.591438       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:19:51.591387       1 shared_informer.go:318] Caches are synced for configmaps
	I1119 22:19:51.593118       1 controller.go:624] quota admission added evaluator for: namespaces
	E1119 22:19:51.595601       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1119 22:19:51.608554       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1119 22:19:52.008399       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:19:52.497067       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:19:52.500707       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:19:52.500727       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:19:52.938966       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:19:52.979169       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:19:53.101027       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:19:53.107157       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1119 22:19:53.108241       1 controller.go:624] quota admission added evaluator for: endpoints
	I1119 22:19:53.112503       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:19:53.552446       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1119 22:19:54.613121       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1119 22:19:54.625563       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:19:54.635960       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1119 22:20:06.459115       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1119 22:20:07.162080       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1119 22:20:07.162080       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [f4532683638eb7620857fe45f4fd3c3ed09ef48600c71e8fb4fb0f9dae88bfb2] <==
	I1119 22:20:06.563934       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-old-k8s-version-975700" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1119 22:20:06.565627       1 event.go:307] "Event occurred" object="kube-system/etcd-old-k8s-version-975700" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1119 22:20:06.565755       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-975700" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1119 22:20:06.609574       1 shared_informer.go:318] Caches are synced for resource quota
	I1119 22:20:06.927535       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 22:20:07.000472       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 22:20:07.000512       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1119 22:20:07.173283       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rnxxf"
	I1119 22:20:07.176815       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mlzfc"
	I1119 22:20:07.368445       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vbfhv"
	I1119 22:20:07.377915       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-8hdh7"
	I1119 22:20:07.385341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="921.876981ms"
	I1119 22:20:07.403436       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.02637ms"
	I1119 22:20:07.403590       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97µs"
	I1119 22:20:08.346162       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1119 22:20:08.357372       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-vbfhv"
	I1119 22:20:08.366742       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.01104ms"
	I1119 22:20:08.373376       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.555995ms"
	I1119 22:20:08.373523       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="103.519µs"
	I1119 22:20:22.284386       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="125.689µs"
	I1119 22:20:22.302759       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.916µs"
	I1119 22:20:23.804590       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.984643ms"
	I1119 22:20:23.825468       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.454615ms"
	I1119 22:20:23.825553       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.257µs"
	I1119 22:20:26.560333       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [bfde9418adc9d7aba501fe3c84086b7de3e6632fdd8aabb2eb31e57c6302f8a1] <==
	I1119 22:20:08.542091       1 server_others.go:69] "Using iptables proxy"
	I1119 22:20:08.554521       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1119 22:20:08.579485       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:20:08.581958       1 server_others.go:152] "Using iptables Proxier"
	I1119 22:20:08.581998       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1119 22:20:08.582008       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1119 22:20:08.582058       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1119 22:20:08.582375       1 server.go:846] "Version info" version="v1.28.0"
	I1119 22:20:08.582389       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:20:08.584350       1 config.go:315] "Starting node config controller"
	I1119 22:20:08.584377       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1119 22:20:08.584426       1 config.go:188] "Starting service config controller"
	I1119 22:20:08.584459       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1119 22:20:08.584486       1 config.go:97] "Starting endpoint slice config controller"
	I1119 22:20:08.584491       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1119 22:20:08.684578       1 shared_informer.go:318] Caches are synced for service config
	I1119 22:20:08.684601       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1119 22:20:08.684577       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [814e6989c64319d934f5f210646b29c75985c3fe82e3642066c6cced56537e32] <==
	W1119 22:19:51.558017       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1119 22:19:51.558302       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1119 22:19:51.557982       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1119 22:19:51.558323       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1119 22:19:51.558217       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1119 22:19:51.558365       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1119 22:19:52.378035       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1119 22:19:52.378068       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1119 22:19:52.502983       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1119 22:19:52.503017       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 22:19:52.577347       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1119 22:19:52.577387       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1119 22:19:52.620635       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1119 22:19:52.620663       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1119 22:19:52.621642       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1119 22:19:52.621673       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1119 22:19:52.622811       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1119 22:19:52.622838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1119 22:19:52.655572       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1119 22:19:52.655637       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1119 22:19:52.670809       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1119 22:19:52.670851       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1119 22:19:52.738351       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1119 22:19:52.738419       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1119 22:19:55.553708       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: I1119 22:20:07.254431    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2532f4d-a32b-45a0-b846-1d2ecea1f926-lib-modules\") pod \"kindnet-mlzfc\" (UID: \"e2532f4d-a32b-45a0-b846-1d2ecea1f926\") " pod="kube-system/kindnet-mlzfc"
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: I1119 22:20:07.254510    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fnz9\" (UniqueName: \"kubernetes.io/projected/f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d-kube-api-access-9fnz9\") pod \"kube-proxy-rnxxf\" (UID: \"f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d\") " pod="kube-system/kube-proxy-rnxxf"
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: I1119 22:20:07.254561    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e2532f4d-a32b-45a0-b846-1d2ecea1f926-cni-cfg\") pod \"kindnet-mlzfc\" (UID: \"e2532f4d-a32b-45a0-b846-1d2ecea1f926\") " pod="kube-system/kindnet-mlzfc"
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: I1119 22:20:07.254783    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d-kube-proxy\") pod \"kube-proxy-rnxxf\" (UID: \"f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d\") " pod="kube-system/kube-proxy-rnxxf"
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: I1119 22:20:07.254836    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d-xtables-lock\") pod \"kube-proxy-rnxxf\" (UID: \"f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d\") " pod="kube-system/kube-proxy-rnxxf"
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: E1119 22:20:07.363793    1560 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: E1119 22:20:07.363834    1560 projected.go:198] Error preparing data for projected volume kube-api-access-rpv66 for pod kube-system/kindnet-mlzfc: configmap "kube-root-ca.crt" not found
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: E1119 22:20:07.363943    1560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e2532f4d-a32b-45a0-b846-1d2ecea1f926-kube-api-access-rpv66 podName:e2532f4d-a32b-45a0-b846-1d2ecea1f926 nodeName:}" failed. No retries permitted until 2025-11-19 22:20:07.863913255 +0000 UTC m=+13.276094662 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rpv66" (UniqueName: "kubernetes.io/projected/e2532f4d-a32b-45a0-b846-1d2ecea1f926-kube-api-access-rpv66") pod "kindnet-mlzfc" (UID: "e2532f4d-a32b-45a0-b846-1d2ecea1f926") : configmap "kube-root-ca.crt" not found
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: E1119 22:20:07.364286    1560 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: E1119 22:20:07.364311    1560 projected.go:198] Error preparing data for projected volume kube-api-access-9fnz9 for pod kube-system/kube-proxy-rnxxf: configmap "kube-root-ca.crt" not found
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: E1119 22:20:07.364372    1560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d-kube-api-access-9fnz9 podName:f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d nodeName:}" failed. No retries permitted until 2025-11-19 22:20:07.864353345 +0000 UTC m=+13.276534732 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9fnz9" (UniqueName: "kubernetes.io/projected/f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d-kube-api-access-9fnz9") pod "kube-proxy-rnxxf" (UID: "f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d") : configmap "kube-root-ca.crt" not found
	Nov 19 22:20:08 old-k8s-version-975700 kubelet[1560]: I1119 22:20:08.753381    1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rnxxf" podStartSLOduration=1.753327393 podCreationTimestamp="2025-11-19 22:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:20:08.753080476 +0000 UTC m=+14.165261906" watchObservedRunningTime="2025-11-19 22:20:08.753327393 +0000 UTC m=+14.165508800"
	Nov 19 22:20:12 old-k8s-version-975700 kubelet[1560]: I1119 22:20:12.861606    1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-mlzfc" podStartSLOduration=2.782502482 podCreationTimestamp="2025-11-19 22:20:07 +0000 UTC" firstStartedPulling="2025-11-19 22:20:08.564687803 +0000 UTC m=+13.976869202" lastFinishedPulling="2025-11-19 22:20:11.643733018 +0000 UTC m=+17.055914418" observedRunningTime="2025-11-19 22:20:12.861400313 +0000 UTC m=+18.273581719" watchObservedRunningTime="2025-11-19 22:20:12.861547698 +0000 UTC m=+18.273729104"
	Nov 19 22:20:22 old-k8s-version-975700 kubelet[1560]: I1119 22:20:22.261744    1560 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 19 22:20:22 old-k8s-version-975700 kubelet[1560]: I1119 22:20:22.283141    1560 topology_manager.go:215] "Topology Admit Handler" podUID="6c937194-8889-47a0-b05f-7af799e18044" podNamespace="kube-system" podName="storage-provisioner"
	Nov 19 22:20:22 old-k8s-version-975700 kubelet[1560]: I1119 22:20:22.284839    1560 topology_manager.go:215] "Topology Admit Handler" podUID="a4057bf2-fe2e-42db-83e9-bc625724c61c" podNamespace="kube-system" podName="coredns-5dd5756b68-8hdh7"
	Nov 19 22:20:22 old-k8s-version-975700 kubelet[1560]: I1119 22:20:22.465780    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbjsb\" (UniqueName: \"kubernetes.io/projected/6c937194-8889-47a0-b05f-7af799e18044-kube-api-access-xbjsb\") pod \"storage-provisioner\" (UID: \"6c937194-8889-47a0-b05f-7af799e18044\") " pod="kube-system/storage-provisioner"
	Nov 19 22:20:22 old-k8s-version-975700 kubelet[1560]: I1119 22:20:22.465975    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd7zm\" (UniqueName: \"kubernetes.io/projected/a4057bf2-fe2e-42db-83e9-bc625724c61c-kube-api-access-zd7zm\") pod \"coredns-5dd5756b68-8hdh7\" (UID: \"a4057bf2-fe2e-42db-83e9-bc625724c61c\") " pod="kube-system/coredns-5dd5756b68-8hdh7"
	Nov 19 22:20:22 old-k8s-version-975700 kubelet[1560]: I1119 22:20:22.466031    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6c937194-8889-47a0-b05f-7af799e18044-tmp\") pod \"storage-provisioner\" (UID: \"6c937194-8889-47a0-b05f-7af799e18044\") " pod="kube-system/storage-provisioner"
	Nov 19 22:20:22 old-k8s-version-975700 kubelet[1560]: I1119 22:20:22.466065    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4057bf2-fe2e-42db-83e9-bc625724c61c-config-volume\") pod \"coredns-5dd5756b68-8hdh7\" (UID: \"a4057bf2-fe2e-42db-83e9-bc625724c61c\") " pod="kube-system/coredns-5dd5756b68-8hdh7"
	Nov 19 22:20:22 old-k8s-version-975700 kubelet[1560]: I1119 22:20:22.790518    1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.790461437 podCreationTimestamp="2025-11-19 22:20:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:20:22.789226683 +0000 UTC m=+28.201408091" watchObservedRunningTime="2025-11-19 22:20:22.790461437 +0000 UTC m=+28.202642846"
	Nov 19 22:20:23 old-k8s-version-975700 kubelet[1560]: I1119 22:20:23.794502    1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-8hdh7" podStartSLOduration=16.794448045 podCreationTimestamp="2025-11-19 22:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:20:23.792204756 +0000 UTC m=+29.204386163" watchObservedRunningTime="2025-11-19 22:20:23.794448045 +0000 UTC m=+29.206629453"
	Nov 19 22:20:25 old-k8s-version-975700 kubelet[1560]: I1119 22:20:25.822716    1560 topology_manager.go:215] "Topology Admit Handler" podUID="b49caea0-80e8-4473-ac1f-f9bd327c3754" podNamespace="default" podName="busybox"
	Nov 19 22:20:25 old-k8s-version-975700 kubelet[1560]: I1119 22:20:25.990052    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87p55\" (UniqueName: \"kubernetes.io/projected/b49caea0-80e8-4473-ac1f-f9bd327c3754-kube-api-access-87p55\") pod \"busybox\" (UID: \"b49caea0-80e8-4473-ac1f-f9bd327c3754\") " pod="default/busybox"
	Nov 19 22:20:28 old-k8s-version-975700 kubelet[1560]: I1119 22:20:28.806269    1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.691001227 podCreationTimestamp="2025-11-19 22:20:25 +0000 UTC" firstStartedPulling="2025-11-19 22:20:26.263867005 +0000 UTC m=+31.676048399" lastFinishedPulling="2025-11-19 22:20:28.379090043 +0000 UTC m=+33.791271442" observedRunningTime="2025-11-19 22:20:28.805872451 +0000 UTC m=+34.218053858" watchObservedRunningTime="2025-11-19 22:20:28.80622427 +0000 UTC m=+34.218405676"
	
	
	==> storage-provisioner [537c778c87f9d8c20894001938b5632c0e5dcc6b1095fb4d266fd4b3995811b2] <==
	I1119 22:20:22.762742       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:20:22.772216       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:20:22.772484       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1119 22:20:22.782676       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:20:22.782729       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"750e6d2d-dbb6-45a4-b78a-de5bffe0f948", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-975700_aeb53126-798f-4b08-be45-abf6358cfbca became leader
	I1119 22:20:22.782814       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-975700_aeb53126-798f-4b08-be45-abf6358cfbca!
	I1119 22:20:22.883137       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-975700_aeb53126-798f-4b08-be45-abf6358cfbca!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-975700 -n old-k8s-version-975700
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-975700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-975700
helpers_test.go:243: (dbg) docker inspect old-k8s-version-975700:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fa1d8405226b204ac72daac6f171881e88b0344b7533643e7e2243a0246fe4ca",
	        "Created": "2025-11-19T22:19:38.284388499Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 244905,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:19:38.321569291Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/fa1d8405226b204ac72daac6f171881e88b0344b7533643e7e2243a0246fe4ca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fa1d8405226b204ac72daac6f171881e88b0344b7533643e7e2243a0246fe4ca/hostname",
	        "HostsPath": "/var/lib/docker/containers/fa1d8405226b204ac72daac6f171881e88b0344b7533643e7e2243a0246fe4ca/hosts",
	        "LogPath": "/var/lib/docker/containers/fa1d8405226b204ac72daac6f171881e88b0344b7533643e7e2243a0246fe4ca/fa1d8405226b204ac72daac6f171881e88b0344b7533643e7e2243a0246fe4ca-json.log",
	        "Name": "/old-k8s-version-975700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-975700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-975700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fa1d8405226b204ac72daac6f171881e88b0344b7533643e7e2243a0246fe4ca",
	                "LowerDir": "/var/lib/docker/overlay2/82f9fc885f3a15658949bf3138691f10889fccea52145002efd1a4a56c392ddc-init/diff:/var/lib/docker/overlay2/b09480e350abbb2f4f48b19448dc8e9ddd0de679fdb8cd59ebc5b758a29b344e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/82f9fc885f3a15658949bf3138691f10889fccea52145002efd1a4a56c392ddc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/82f9fc885f3a15658949bf3138691f10889fccea52145002efd1a4a56c392ddc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/82f9fc885f3a15658949bf3138691f10889fccea52145002efd1a4a56c392ddc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-975700",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-975700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-975700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-975700",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-975700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bdcc92270fe5f34f2b3211c596bcb03676f7d021d1ab19d1405cbc9ff65513fb",
	            "SandboxKey": "/var/run/docker/netns/bdcc92270fe5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-975700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e025fa4e3e969ab94188de7ccce8cf41b046fa1de9b7b2485f5bcca1daedd849",
	                    "EndpointID": "8cbfdb5bbf934780f84e734118116ddf815c2fea44670767c9e66317e265e4f4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "e6:6b:48:9f:07:21",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-975700",
	                        "fa1d8405226b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-975700 -n old-k8s-version-975700
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-975700 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-975700 logs -n 25: (1.017876639s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-836292 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                         │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │ 19 Nov 25 22:19 UTC │
	│ ssh     │ -p cilium-904997 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo containerd config dump                                                                                                                                                                                                        │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo crio config                                                                                                                                                                                                                   │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ delete  │ -p cilium-904997                                                                                                                                                                                                                                    │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │ 19 Nov 25 22:18 UTC │
	│ start   │ -p force-systemd-flag-635885 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-635885 │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │ 19 Nov 25 22:19 UTC │
	│ ssh     │ -p NoKubernetes-836292 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │                     │
	│ ssh     │ force-systemd-flag-635885 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-635885 │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ delete  │ -p force-systemd-flag-635885                                                                                                                                                                                                                        │ force-systemd-flag-635885 │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ stop    │ -p NoKubernetes-836292                                                                                                                                                                                                                              │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ start   │ -p NoKubernetes-836292 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ start   │ -p cert-options-071115 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-071115       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ ssh     │ -p NoKubernetes-836292 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │                     │
	│ delete  │ -p NoKubernetes-836292                                                                                                                                                                                                                              │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ start   │ -p old-k8s-version-975700 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-975700    │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:20 UTC │
	│ ssh     │ cert-options-071115 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-071115       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ ssh     │ -p cert-options-071115 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-071115       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ delete  │ -p cert-options-071115                                                                                                                                                                                                                              │ cert-options-071115       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ start   │ -p no-preload-638439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-638439         │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:19:48
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:19:48.990275  248121 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:19:48.990406  248121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:19:48.990419  248121 out.go:374] Setting ErrFile to fd 2...
	I1119 22:19:48.990423  248121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:19:48.990627  248121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
	I1119 22:19:48.991193  248121 out.go:368] Setting JSON to false
	I1119 22:19:48.992321  248121 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3729,"bootTime":1763587060,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:19:48.992426  248121 start.go:143] virtualization: kvm guest
	I1119 22:19:48.994475  248121 out.go:179] * [no-preload-638439] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:19:48.995854  248121 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:19:48.995867  248121 notify.go:221] Checking for updates...
	I1119 22:19:48.998724  248121 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:19:49.000141  248121 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:19:49.004556  248121 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9296/.minikube
	I1119 22:19:49.005782  248121 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:19:49.006906  248121 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:19:49.008438  248121 config.go:182] Loaded profile config "cert-expiration-207460": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:19:49.008559  248121 config.go:182] Loaded profile config "kubernetes-upgrade-133839": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:19:49.008672  248121 config.go:182] Loaded profile config "old-k8s-version-975700": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 22:19:49.008773  248121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:19:49.032838  248121 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:19:49.032973  248121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:19:49.090138  248121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:19:49.078907682 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:19:49.090254  248121 docker.go:319] overlay module found
	I1119 22:19:49.091878  248121 out.go:179] * Using the docker driver based on user configuration
	I1119 22:19:49.093038  248121 start.go:309] selected driver: docker
	I1119 22:19:49.093053  248121 start.go:930] validating driver "docker" against <nil>
	I1119 22:19:49.093064  248121 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:19:49.093625  248121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:19:49.156775  248121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:19:49.145211302 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:19:49.157058  248121 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:19:49.157439  248121 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:19:49.159270  248121 out.go:179] * Using Docker driver with root privileges
	I1119 22:19:49.160689  248121 cni.go:84] Creating CNI manager for ""
	I1119 22:19:49.160762  248121 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:19:49.160776  248121 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:19:49.160859  248121 start.go:353] cluster config:
	{Name:no-preload-638439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-638439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:19:49.162538  248121 out.go:179] * Starting "no-preload-638439" primary control-plane node in "no-preload-638439" cluster
	I1119 22:19:49.165506  248121 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 22:19:49.166733  248121 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:19:49.168220  248121 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:19:49.168286  248121 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:19:49.168353  248121 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/config.json ...
	I1119 22:19:49.168395  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/config.json: {Name:mk80aa81bbdb1209c6edea855d376fb83f4d3158 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:19:49.168457  248121 cache.go:107] acquiring lock: {Name:mk3047e241e868539f7fa71732db2494bd5accac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.168492  248121 cache.go:107] acquiring lock: {Name:mkfa0cff605af699ff39a13e0c5b50d01194658e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.168527  248121 cache.go:107] acquiring lock: {Name:mk97f6c43b208e1a8e4ae123374c490c517b3f77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.168548  248121 cache.go:115] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1119 22:19:49.168561  248121 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 131.881µs
	I1119 22:19:49.168577  248121 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1119 22:19:49.168586  248121 cache.go:107] acquiring lock: {Name:mk95307f4a2dfa9e7a1dbc92b6b01bf8659d9b13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.168623  248121 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:49.168652  248121 cache.go:107] acquiring lock: {Name:mk07d9df97c614ffb0affecc21609079d8bc04b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.168677  248121 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:49.168687  248121 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:49.168749  248121 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1119 22:19:49.169004  248121 cache.go:107] acquiring lock: {Name:mk5d2dd3f2b18e53fa90921f4c0c75406a912168 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.169610  248121 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:49.169116  248121 cache.go:107] acquiring lock: {Name:mkabd0eddb0cd66931eabcbabac2ddbe82464607 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.170495  248121 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:49.169136  248121 cache.go:107] acquiring lock: {Name:mkc18e74e5d25fdb795ed308cf7ce3da142a9be0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.170703  248121 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:49.171552  248121 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:49.171558  248121 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1119 22:19:49.171569  248121 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:49.171576  248121 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:49.172459  248121 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:49.172478  248121 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:49.172507  248121 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:49.200114  248121 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:19:49.200187  248121 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:19:49.200226  248121 cache.go:243] Successfully downloaded all kic artifacts
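
The pull is skipped above because the digest-pinned kicbase image is already in the local Docker daemon. The equivalent manual check is simply to list the local copies of that repository and confirm the pinned digest is among them:

    # The sha256 digest pinned in the KicBaseImage field above should show up here
    # if the base-image pull can be skipped.
    docker images --digests gcr.io/k8s-minikube/kicbase-builds
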
	I1119 22:19:49.200265  248121 start.go:360] acquireMachinesLock for no-preload-638439: {Name:mk6b4dc7fd24c69d9288f594d61933b094ed5442 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.200436  248121 start.go:364] duration metric: took 142.192µs to acquireMachinesLock for "no-preload-638439"
	I1119 22:19:49.200608  248121 start.go:93] Provisioning new machine with config: &{Name:no-preload-638439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-638439 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:19:49.200727  248121 start.go:125] createHost starting for "" (driver="docker")
	I1119 22:19:46.119049  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:19:46.119476  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
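
The api_server.go lines above are a liveness probe loop: each pass issues a GET against the apiserver's /healthz endpoint and treats "connection refused" as "not up yet". A minimal way to reproduce the same probe by hand, assuming the node IP and port from this run (192.168.76.2:8443) are reachable from the host:

    # Probe the apiserver health endpoint the way the retry loop above does.
    # -k skips verification of the cluster-local serving certificate,
    # -sS stays quiet on success but still prints connection errors.
    curl -ksS --max-time 2 https://192.168.76.2:8443/healthz; echo
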
	I1119 22:19:46.119522  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:19:46.119566  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:19:46.151572  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:46.151601  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:46.151607  216336 cri.go:89] found id: ""
	I1119 22:19:46.151617  216336 logs.go:282] 2 containers: [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:19:46.151687  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:46.155958  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:46.160473  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:19:46.160530  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:19:46.191589  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:46.191612  216336 cri.go:89] found id: ""
	I1119 22:19:46.191619  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:19:46.191670  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:46.196383  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:19:46.196437  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:19:46.225509  216336 cri.go:89] found id: ""
	I1119 22:19:46.225529  216336 logs.go:282] 0 containers: []
	W1119 22:19:46.225540  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:19:46.225546  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:19:46.225599  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:19:46.254866  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:46.254913  216336 cri.go:89] found id: ""
	I1119 22:19:46.254924  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:19:46.254979  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:46.259701  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:19:46.259765  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:19:46.292564  216336 cri.go:89] found id: ""
	I1119 22:19:46.292591  216336 logs.go:282] 0 containers: []
	W1119 22:19:46.292601  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:19:46.292608  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:19:46.292667  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:19:46.329564  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:46.329596  216336 cri.go:89] found id: ""
	I1119 22:19:46.329606  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:19:46.329667  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:46.335222  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:19:46.335276  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:19:46.367004  216336 cri.go:89] found id: ""
	I1119 22:19:46.367028  216336 logs.go:282] 0 containers: []
	W1119 22:19:46.367039  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:19:46.367047  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:19:46.367105  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:19:46.399927  216336 cri.go:89] found id: ""
	I1119 22:19:46.399974  216336 logs.go:282] 0 containers: []
	W1119 22:19:46.399984  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:19:46.400002  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:19:46.400017  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:19:46.463044  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:19:46.463068  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:19:46.463083  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:46.497691  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:19:46.497718  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:46.535424  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:19:46.535455  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:46.575124  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:19:46.575154  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:19:46.607742  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:19:46.607769  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:19:46.710299  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:19:46.710332  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:19:46.724051  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:19:46.724080  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:46.762457  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:19:46.762489  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:46.803568  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:19:46.803601  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:19:49.354660  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:19:49.355043  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:19:49.355109  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:19:49.355169  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:19:49.395681  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:49.395705  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:49.395709  216336 cri.go:89] found id: ""
	I1119 22:19:49.395716  216336 logs.go:282] 2 containers: [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:19:49.395781  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:49.403424  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:49.410799  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:19:49.410949  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:19:49.452918  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:49.452941  216336 cri.go:89] found id: ""
	I1119 22:19:49.452952  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:19:49.453011  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:49.458252  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:19:49.458323  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:19:49.497813  216336 cri.go:89] found id: ""
	I1119 22:19:49.497837  216336 logs.go:282] 0 containers: []
	W1119 22:19:49.497855  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:19:49.497863  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:19:49.497929  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:19:49.533334  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:49.533350  216336 cri.go:89] found id: ""
	I1119 22:19:49.533357  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:19:49.533399  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:49.537784  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:19:49.537858  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:19:49.568018  216336 cri.go:89] found id: ""
	I1119 22:19:49.568044  216336 logs.go:282] 0 containers: []
	W1119 22:19:49.568056  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:19:49.568063  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:19:49.568119  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:19:49.609525  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:49.609556  216336 cri.go:89] found id: ""
	I1119 22:19:49.609566  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:19:49.609626  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:49.616140  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:19:49.616211  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:19:49.655231  216336 cri.go:89] found id: ""
	I1119 22:19:49.655262  216336 logs.go:282] 0 containers: []
	W1119 22:19:49.655272  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:19:49.655279  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:19:49.655333  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:19:49.689095  216336 cri.go:89] found id: ""
	I1119 22:19:49.689153  216336 logs.go:282] 0 containers: []
	W1119 22:19:49.689165  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:19:49.689184  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:19:49.689221  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:19:49.810665  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:19:49.810701  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:19:49.901949  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:19:49.901999  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:19:49.902017  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:49.959095  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:19:49.959128  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:50.003553  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:19:50.003592  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:19:50.058586  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:19:50.058623  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:19:50.074307  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:19:50.074340  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:50.111045  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:19:50.111081  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:50.150599  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:19:50.150632  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:50.185189  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:19:50.185216  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
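
Each "Gathering logs for ..." pass above follows the same two-step pattern per component: list matching container IDs with crictl, then tail each container's log over SSH. Run directly on the node, the same commands look like this (kube-apiserver used as the example component):

    # List all kube-apiserver containers, running or exited, and tail each one,
    # mirroring the crictl invocations issued by the log-gathering pass above.
    for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
      echo "=== container $id ==="
      sudo crictl logs --tail 400 "$id"
    done
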
	I1119 22:19:48.204748  244005 out.go:252]   - Booting up control plane ...
	I1119 22:19:48.204897  244005 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:19:48.205005  244005 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:19:48.206240  244005 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:19:48.231808  244005 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:19:48.232853  244005 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:19:48.232929  244005 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:19:48.338373  244005 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1119 22:19:49.203330  248121 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:19:49.203668  248121 start.go:159] libmachine.API.Create for "no-preload-638439" (driver="docker")
	I1119 22:19:49.203755  248121 client.go:173] LocalClient.Create starting
	I1119 22:19:49.203905  248121 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem
	I1119 22:19:49.203977  248121 main.go:143] libmachine: Decoding PEM data...
	I1119 22:19:49.204016  248121 main.go:143] libmachine: Parsing certificate...
	I1119 22:19:49.204103  248121 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem
	I1119 22:19:49.204159  248121 main.go:143] libmachine: Decoding PEM data...
	I1119 22:19:49.204190  248121 main.go:143] libmachine: Parsing certificate...
	I1119 22:19:49.204684  248121 cli_runner.go:164] Run: docker network inspect no-preload-638439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:19:49.233073  248121 cli_runner.go:211] docker network inspect no-preload-638439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:19:49.233150  248121 network_create.go:284] running [docker network inspect no-preload-638439] to gather additional debugging logs...
	I1119 22:19:49.233181  248121 cli_runner.go:164] Run: docker network inspect no-preload-638439
	W1119 22:19:49.260692  248121 cli_runner.go:211] docker network inspect no-preload-638439 returned with exit code 1
	I1119 22:19:49.260724  248121 network_create.go:287] error running [docker network inspect no-preload-638439]: docker network inspect no-preload-638439: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-638439 not found
	I1119 22:19:49.260740  248121 network_create.go:289] output of [docker network inspect no-preload-638439]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-638439 not found
	
	** /stderr **
	I1119 22:19:49.260835  248121 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:19:49.281699  248121 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-02d9279961e7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:f0:7b:99:dd:08} reservation:<nil>}
	I1119 22:19:49.282496  248121 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-474134d72c89 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ee:14:41:ce:21:e4} reservation:<nil>}
	I1119 22:19:49.283428  248121 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-527206f47d61 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:ef:fd:4c:e4:1b} reservation:<nil>}
	I1119 22:19:49.284394  248121 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ac16fd64007f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:16:dc:21:09:78:e5} reservation:<nil>}
	I1119 22:19:49.285073  248121 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-11547e9c7cf3 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:a2:49:21:10:91:74} reservation:<nil>}
	I1119 22:19:49.286118  248121 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-e025fa4e3e96 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:c2:19:71:ce:4a:3c} reservation:<nil>}
	I1119 22:19:49.287275  248121 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e92190}
	I1119 22:19:49.287353  248121 network_create.go:124] attempt to create docker network no-preload-638439 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1119 22:19:49.287448  248121 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-638439 no-preload-638439
	I1119 22:19:49.349621  248121 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1119 22:19:49.349748  248121 network_create.go:108] docker network no-preload-638439 192.168.103.0/24 created
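
The "skipping subnet ... that is taken" lines show minikube walking candidate 192.168.x.0/24 blocks until it finds one that no existing bridge claims, then creating the cluster network on it (192.168.103.0/24 here). The occupied subnets it skips can be listed straight from Docker, which is the same information those decisions are based on:

    # Print each Docker network together with its IPv4 subnet(s).
    docker network ls --format '{{.Name}}' | while read -r net; do
      printf '%s\t%s\n' "$net" \
        "$(docker network inspect "$net" --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}')"
    done
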
	I1119 22:19:49.349780  248121 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-638439" container
	I1119 22:19:49.349859  248121 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:19:49.350149  248121 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1119 22:19:49.361305  248121 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1119 22:19:49.363150  248121 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1119 22:19:49.375619  248121 cli_runner.go:164] Run: docker volume create no-preload-638439 --label name.minikube.sigs.k8s.io=no-preload-638439 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:19:49.389385  248121 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1119 22:19:49.396358  248121 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1119 22:19:49.402036  248121 oci.go:103] Successfully created a docker volume no-preload-638439
	I1119 22:19:49.402119  248121 cli_runner.go:164] Run: docker run --rm --name no-preload-638439-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-638439 --entrypoint /usr/bin/test -v no-preload-638439:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:19:49.404338  248121 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1119 22:19:49.471774  248121 cache.go:157] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1119 22:19:49.471808  248121 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 303.216742ms
	I1119 22:19:49.471832  248121 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1119 22:19:49.854076  248121 cache.go:157] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1119 22:19:49.854102  248121 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 685.635122ms
	I1119 22:19:49.854114  248121 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1119 22:19:49.969965  248121 oci.go:107] Successfully prepared a docker volume no-preload-638439
	I1119 22:19:49.970027  248121 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	W1119 22:19:49.970211  248121 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 22:19:49.970251  248121 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 22:19:49.970298  248121 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:19:50.046746  248121 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-638439 --name no-preload-638439 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-638439 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-638439 --network no-preload-638439 --ip 192.168.103.2 --volume no-preload-638439:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
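
Because the container is published with --publish=127.0.0.1::22 (and similar entries for 8443, 2376, 5000 and 32443), Docker assigns ephemeral host ports; the SSH port resolved later in this run is 33063. The mapping can be recovered by hand with either of the following, using the container name from this run:

    # Ask Docker which localhost port is bound to the node's SSH port.
    docker port no-preload-638439 22
    # Or use the same inspect template the provisioner applies below:
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-638439
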
	I1119 22:19:50.374513  248121 cli_runner.go:164] Run: docker container inspect no-preload-638439 --format={{.State.Running}}
	I1119 22:19:50.397354  248121 cli_runner.go:164] Run: docker container inspect no-preload-638439 --format={{.State.Status}}
	I1119 22:19:50.420153  248121 cli_runner.go:164] Run: docker exec no-preload-638439 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:19:50.480826  248121 oci.go:144] the created container "no-preload-638439" has a running status.
	I1119 22:19:50.480855  248121 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa...
	I1119 22:19:50.741014  248121 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:19:50.777653  248121 cli_runner.go:164] Run: docker container inspect no-preload-638439 --format={{.State.Status}}
	I1119 22:19:50.805773  248121 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:19:50.805802  248121 kic_runner.go:114] Args: [docker exec --privileged no-preload-638439 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:19:50.864742  248121 cli_runner.go:164] Run: docker container inspect no-preload-638439 --format={{.State.Status}}
	I1119 22:19:50.878812  248121 cache.go:157] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1119 22:19:50.878846  248121 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.709887948s
	I1119 22:19:50.878866  248121 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1119 22:19:50.883024  248121 cache.go:157] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1119 22:19:50.883052  248121 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.714530905s
	I1119 22:19:50.883067  248121 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1119 22:19:50.889090  248121 cache.go:157] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1119 22:19:50.889119  248121 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.72053761s
	I1119 22:19:50.889134  248121 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1119 22:19:50.890545  248121 machine.go:94] provisionDockerMachine start ...
	I1119 22:19:50.890654  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:50.917029  248121 main.go:143] libmachine: Using SSH client type: native
	I1119 22:19:50.917372  248121 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1119 22:19:50.917394  248121 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:19:50.918143  248121 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41082->127.0.0.1:33063: read: connection reset by peer
	I1119 22:19:50.954753  248121 cache.go:157] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1119 22:19:50.954786  248121 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.785730546s
	I1119 22:19:50.954801  248121 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1119 22:19:51.295575  248121 cache.go:157] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1119 22:19:51.295602  248121 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.126530323s
	I1119 22:19:51.295614  248121 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1119 22:19:51.295629  248121 cache.go:87] Successfully saved all images to host disk.
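
This is the extra work a no-preload start does: instead of downloading one preloaded tarball, every required image is fetched and written as its own tar file under .minikube/cache/images, to be loaded into the node's container runtime later. The effect is roughly the same as exporting the images yourself, for example (a sketch using one image from the list above; the cache's own file layout and format may differ):

    # Pull an image on the host and export it to a tarball, loosely analogous
    # to the per-image cache files written under .minikube/cache/images.
    docker pull registry.k8s.io/etcd:3.6.4-0
    docker save -o etcd_3.6.4-0.tar registry.k8s.io/etcd:3.6.4-0
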
	I1119 22:19:53.340728  244005 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.002509 seconds
	I1119 22:19:53.340920  244005 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:19:53.353852  244005 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:19:53.877436  244005 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:19:53.877630  244005 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-975700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:19:54.388156  244005 kubeadm.go:319] [bootstrap-token] Using token: cb0uuv.ole7whobrm4tnmeu
	I1119 22:19:54.389814  244005 out.go:252]   - Configuring RBAC rules ...
	I1119 22:19:54.389996  244005 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:19:54.396226  244005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:19:54.404040  244005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:19:54.407336  244005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:19:54.410095  244005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:19:54.412761  244005 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:19:54.424912  244005 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:19:54.627091  244005 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:19:54.803149  244005 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:19:54.807538  244005 kubeadm.go:319] 
	I1119 22:19:54.807624  244005 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:19:54.807631  244005 kubeadm.go:319] 
	I1119 22:19:54.807719  244005 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:19:54.807724  244005 kubeadm.go:319] 
	I1119 22:19:54.807753  244005 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:19:54.807821  244005 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:19:54.807898  244005 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:19:54.807905  244005 kubeadm.go:319] 
	I1119 22:19:54.807968  244005 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:19:54.807973  244005 kubeadm.go:319] 
	I1119 22:19:54.808037  244005 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:19:54.808042  244005 kubeadm.go:319] 
	I1119 22:19:54.808105  244005 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:19:54.808197  244005 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:19:54.808278  244005 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:19:54.808283  244005 kubeadm.go:319] 
	I1119 22:19:54.808378  244005 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:19:54.808482  244005 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:19:54.808488  244005 kubeadm.go:319] 
	I1119 22:19:54.808581  244005 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token cb0uuv.ole7whobrm4tnmeu \
	I1119 22:19:54.808697  244005 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6123875ff628fb9eedbd72f2253477865aa197083b84a1d60cb6c00de308bc63 \
	I1119 22:19:54.808745  244005 kubeadm.go:319] 	--control-plane 
	I1119 22:19:54.808753  244005 kubeadm.go:319] 
	I1119 22:19:54.808860  244005 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:19:54.808867  244005 kubeadm.go:319] 
	I1119 22:19:54.808978  244005 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token cb0uuv.ole7whobrm4tnmeu \
	I1119 22:19:54.809119  244005 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6123875ff628fb9eedbd72f2253477865aa197083b84a1d60cb6c00de308bc63 
	I1119 22:19:54.812703  244005 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 22:19:54.812825  244005 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
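
The join commands in the kubeadm output above carry a --discovery-token-ca-cert-hash, which is a SHA-256 over the cluster CA's public key. If that hash is needed again after this output has scrolled away, the standard recipe from the kubeadm documentation recomputes it on the control-plane node:

    # Recompute the discovery-token CA cert hash from the cluster CA certificate.
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'
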
	I1119 22:19:54.812852  244005 cni.go:84] Creating CNI manager for ""
	I1119 22:19:54.812906  244005 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:19:54.814910  244005 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:19:52.733247  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:19:52.733770  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:19:52.733821  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:19:52.733900  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:19:52.766790  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:52.766819  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:52.766824  216336 cri.go:89] found id: ""
	I1119 22:19:52.766834  216336 logs.go:282] 2 containers: [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:19:52.766917  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:52.771725  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:52.776283  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:19:52.776357  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:19:52.808152  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:52.808179  216336 cri.go:89] found id: ""
	I1119 22:19:52.808190  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:19:52.808260  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:52.812851  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:19:52.812954  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:19:52.844459  216336 cri.go:89] found id: ""
	I1119 22:19:52.844483  216336 logs.go:282] 0 containers: []
	W1119 22:19:52.844492  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:19:52.844499  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:19:52.844560  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:19:52.875911  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:52.875939  216336 cri.go:89] found id: ""
	I1119 22:19:52.875948  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:19:52.876008  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:52.880449  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:19:52.880526  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:19:52.913101  216336 cri.go:89] found id: ""
	I1119 22:19:52.913139  216336 logs.go:282] 0 containers: []
	W1119 22:19:52.913150  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:19:52.913158  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:19:52.913240  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:19:52.945143  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:52.945172  216336 cri.go:89] found id: ""
	I1119 22:19:52.945182  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:19:52.945240  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:52.949921  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:19:52.950006  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:19:52.984180  216336 cri.go:89] found id: ""
	I1119 22:19:52.984214  216336 logs.go:282] 0 containers: []
	W1119 22:19:52.984225  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:19:52.984233  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:19:52.984296  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:19:53.016636  216336 cri.go:89] found id: ""
	I1119 22:19:53.016661  216336 logs.go:282] 0 containers: []
	W1119 22:19:53.016671  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:19:53.016691  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:19:53.016707  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:53.053700  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:19:53.053730  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:53.088889  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:19:53.088922  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:19:53.104350  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:19:53.104378  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:19:53.165418  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:19:53.165442  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:19:53.165460  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:53.197214  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:19:53.197252  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:53.228109  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:19:53.228145  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:53.261694  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:19:53.261727  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:19:53.302850  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:19:53.302891  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:19:53.333442  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:19:53.333466  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:19:54.046074  248121 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-638439
	
	I1119 22:19:54.046106  248121 ubuntu.go:182] provisioning hostname "no-preload-638439"
	I1119 22:19:54.046172  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:54.065777  248121 main.go:143] libmachine: Using SSH client type: native
	I1119 22:19:54.066044  248121 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1119 22:19:54.066060  248121 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-638439 && echo "no-preload-638439" | sudo tee /etc/hostname
	I1119 22:19:54.204707  248121 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-638439
	
	I1119 22:19:54.204779  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:54.223401  248121 main.go:143] libmachine: Using SSH client type: native
	I1119 22:19:54.223669  248121 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1119 22:19:54.223696  248121 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-638439' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-638439/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-638439' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:19:54.352178  248121 main.go:143] libmachine: SSH cmd err, output: <nil>: 
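If one wanted to confirm that the hostname script above took effect, a quick hedged check from inside the no-preload-638439 machine (e.g. via `minikube ssh` or `docker exec`) might look like this:
	# hostname should now report the machine name set above
	hostname
	# and /etc/hosts should carry the 127.0.1.1 mapping added by the script
	grep -n 'no-preload-638439' /etc/hostname /etc/hosts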
	I1119 22:19:54.352206  248121 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9296/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9296/.minikube}
	I1119 22:19:54.352222  248121 ubuntu.go:190] setting up certificates
	I1119 22:19:54.352230  248121 provision.go:84] configureAuth start
	I1119 22:19:54.352301  248121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-638439
	I1119 22:19:54.371286  248121 provision.go:143] copyHostCerts
	I1119 22:19:54.371354  248121 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem, removing ...
	I1119 22:19:54.371370  248121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem
	I1119 22:19:54.371451  248121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem (1078 bytes)
	I1119 22:19:54.371570  248121 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem, removing ...
	I1119 22:19:54.371582  248121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem
	I1119 22:19:54.371623  248121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem (1123 bytes)
	I1119 22:19:54.371701  248121 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem, removing ...
	I1119 22:19:54.371710  248121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem
	I1119 22:19:54.371748  248121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem (1679 bytes)
	I1119 22:19:54.371818  248121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem org=jenkins.no-preload-638439 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-638439]
	I1119 22:19:54.471021  248121 provision.go:177] copyRemoteCerts
	I1119 22:19:54.471092  248121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:19:54.471126  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:54.492235  248121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa Username:docker}
	I1119 22:19:54.594331  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:19:54.619378  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 22:19:54.640347  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 22:19:54.663269  248121 provision.go:87] duration metric: took 311.007703ms to configureAuth
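The provision.go:117 line above generates a server certificate with the SANs listed there (127.0.0.1, 192.168.103.2, localhost, minikube, no-preload-638439). A hedged sketch of how to inspect those SANs on the generated file, using the path from the log and assuming openssl is installed on the host:
	# dump the certificate and show its Subject Alternative Name block
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'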
	I1119 22:19:54.663306  248121 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:19:54.663514  248121 config.go:182] Loaded profile config "no-preload-638439": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:19:54.663528  248121 machine.go:97] duration metric: took 3.772952055s to provisionDockerMachine
	I1119 22:19:54.663538  248121 client.go:176] duration metric: took 5.459757711s to LocalClient.Create
	I1119 22:19:54.663558  248121 start.go:167] duration metric: took 5.459889493s to libmachine.API.Create "no-preload-638439"
	I1119 22:19:54.663572  248121 start.go:293] postStartSetup for "no-preload-638439" (driver="docker")
	I1119 22:19:54.663584  248121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:19:54.663643  248121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:19:54.663702  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:54.693309  248121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa Username:docker}
	I1119 22:19:54.794533  248121 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:19:54.799614  248121 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:19:54.799652  248121 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:19:54.799667  248121 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9296/.minikube/addons for local assets ...
	I1119 22:19:54.799750  248121 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9296/.minikube/files for local assets ...
	I1119 22:19:54.799853  248121 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem -> 128212.pem in /etc/ssl/certs
	I1119 22:19:54.800010  248121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:19:54.811703  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem --> /etc/ssl/certs/128212.pem (1708 bytes)
	I1119 22:19:54.833815  248121 start.go:296] duration metric: took 170.228401ms for postStartSetup
	I1119 22:19:54.834269  248121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-638439
	I1119 22:19:54.855648  248121 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/config.json ...
	I1119 22:19:54.855997  248121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:19:54.856065  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:54.875839  248121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa Username:docker}
	I1119 22:19:54.971298  248121 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:19:54.976558  248121 start.go:128] duration metric: took 5.775804384s to createHost
	I1119 22:19:54.976584  248121 start.go:83] releasing machines lock for "no-preload-638439", held for 5.775996243s
	I1119 22:19:54.976652  248121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-638439
	I1119 22:19:54.996323  248121 ssh_runner.go:195] Run: cat /version.json
	I1119 22:19:54.996379  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:54.996397  248121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:19:54.996468  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:55.015498  248121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa Username:docker}
	I1119 22:19:55.015796  248121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa Username:docker}
	I1119 22:19:55.110222  248121 ssh_runner.go:195] Run: systemctl --version
	I1119 22:19:55.167157  248121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:19:55.172373  248121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:19:55.172445  248121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:19:55.200823  248121 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 22:19:55.200849  248121 start.go:496] detecting cgroup driver to use...
	I1119 22:19:55.200917  248121 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 22:19:55.200971  248121 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 22:19:55.216429  248121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 22:19:55.230198  248121 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:19:55.230259  248121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:19:55.247760  248121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:19:55.266193  248121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:19:55.355176  248121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:19:55.456550  248121 docker.go:234] disabling docker service ...
	I1119 22:19:55.456609  248121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:19:55.479653  248121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:19:55.493533  248121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:19:55.592560  248121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:19:55.702080  248121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
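The stop/disable/mask sequence above is meant to leave both docker and cri-docker out of the way so containerd is the only CRI. A small hedged verification sketch (plain systemctl queries; masked units report "masked" and a non-zero exit, which is expected here):
	# both runtimes should report inactive after the stop+disable+mask sequence
	sudo systemctl is-active docker.service cri-docker.service
	sudo systemctl is-enabled docker.socket cri-docker.socket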
	I1119 22:19:55.719351  248121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:19:55.735307  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 22:19:55.748222  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 22:19:55.759552  248121 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1119 22:19:55.759604  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1119 22:19:55.771633  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:19:55.782179  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 22:19:55.791940  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:19:55.801486  248121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:19:55.810671  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 22:19:55.820637  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 22:19:55.830057  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 22:19:55.839605  248121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:19:55.847930  248121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:19:55.856300  248121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:19:55.943868  248121 ssh_runner.go:195] Run: sudo systemctl restart containerd
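The sed edits between 22:19:55.73 and 22:19:55.84 rewrite /etc/containerd/config.toml to pin the pause image, switch the runtime to the systemd cgroup driver (SystemdCgroup = true), point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports before containerd is restarted. A hedged sketch of how to verify the result after the restart (grep keys match the sed expressions above; `crictl info` only confirms the CRI endpoint responds):
	# confirm the keys the sed expressions above are meant to set
	sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	# the CRI endpoint should answer once the restarted daemon is up
	sudo crictl info > /dev/null && echo "containerd CRI endpoint is answering"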
	I1119 22:19:56.031481  248121 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 22:19:56.031555  248121 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 22:19:56.036560  248121 start.go:564] Will wait 60s for crictl version
	I1119 22:19:56.036619  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.040772  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:19:56.068661  248121 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 22:19:56.068728  248121 ssh_runner.go:195] Run: containerd --version
	I1119 22:19:56.092486  248121 ssh_runner.go:195] Run: containerd --version
	I1119 22:19:56.118002  248121 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1119 22:19:54.816277  244005 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:19:54.820558  244005 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1119 22:19:54.820581  244005 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:19:54.833857  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:19:55.525202  244005 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:19:55.525370  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:55.525485  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-975700 minikube.k8s.io/updated_at=2025_11_19T22_19_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=old-k8s-version-975700 minikube.k8s.io/primary=true
	I1119 22:19:55.543472  244005 ops.go:34] apiserver oom_adj: -16
	I1119 22:19:55.632765  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:56.133706  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:56.632860  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:57.133046  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:56.119594  248121 cli_runner.go:164] Run: docker network inspect no-preload-638439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:19:56.139074  248121 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1119 22:19:56.143662  248121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:19:56.156640  248121 kubeadm.go:884] updating cluster {Name:no-preload-638439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-638439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:19:56.156774  248121 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:19:56.156835  248121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:19:56.185228  248121 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1119 22:19:56.185258  248121 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1119 22:19:56.185326  248121 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:56.185359  248121 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:56.185391  248121 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1119 22:19:56.185403  248121 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:56.185415  248121 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:56.185453  248121 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:56.185334  248121 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:19:56.185400  248121 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:56.186856  248121 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:56.186874  248121 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:19:56.186979  248121 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:56.186979  248121 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:56.187070  248121 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1119 22:19:56.187094  248121 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:56.187129  248121 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:56.187150  248121 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:56.332716  248121 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1119 22:19:56.332783  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:56.332809  248121 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1119 22:19:56.332864  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:56.335699  248121 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1119 22:19:56.335755  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:56.343400  248121 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115"
	I1119 22:19:56.343484  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:56.354423  248121 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1119 22:19:56.354489  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:56.357606  248121 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1119 22:19:56.357630  248121 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1119 22:19:56.357659  248121 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:56.357662  248121 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:56.357709  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.357709  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.359708  248121 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1119 22:19:56.359750  248121 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:56.359792  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.365141  248121 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1119 22:19:56.365211  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1119 22:19:56.370262  248121 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1119 22:19:56.370317  248121 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:56.370368  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.380909  248121 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1119 22:19:56.380976  248121 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:56.381006  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:56.381021  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.381050  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:56.381079  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:56.387736  248121 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1119 22:19:56.387826  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:56.388049  248121 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1119 22:19:56.388093  248121 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1119 22:19:56.388134  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:56.388139  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.388097  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:56.419491  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:56.419632  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:56.422653  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:56.424802  248121 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1119 22:19:56.424851  248121 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:56.424918  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.426559  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:56.426657  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:56.426745  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1119 22:19:56.457323  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:56.459754  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:56.459823  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:56.459928  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:56.464385  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:56.464524  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1119 22:19:56.464526  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:56.499739  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1119 22:19:56.499837  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1119 22:19:56.504038  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1119 22:19:56.504120  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 22:19:56.504047  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:56.504087  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1119 22:19:56.504256  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1119 22:19:56.507722  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1119 22:19:56.507817  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1119 22:19:56.507959  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1119 22:19:56.508035  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1119 22:19:56.508064  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1119 22:19:56.508205  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1119 22:19:56.508348  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 22:19:56.515236  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1119 22:19:56.515270  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1119 22:19:56.555985  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1119 22:19:56.556025  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1119 22:19:56.556078  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:56.556101  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1119 22:19:56.556122  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1119 22:19:56.571156  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1119 22:19:56.571205  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1119 22:19:56.571220  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1119 22:19:56.571322  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1119 22:19:56.646952  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1119 22:19:56.646960  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1119 22:19:56.646995  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1119 22:19:56.647066  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 22:19:56.713984  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1119 22:19:56.714047  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1119 22:19:56.738791  248121 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1119 22:19:56.738923  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1119 22:19:56.888282  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
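Each cached image above follows the same pattern: stat the tarball on the node, scp it over from the host cache if the stat fails, then import it into containerd's k8s.io namespace with ctr and confirm it is visible to the CRI. A stand-alone sketch of that pattern for one image, with paths taken from the log; `user@node` is a placeholder for however you reach the machine:
	IMG=/var/lib/minikube/images/pause_3.10.1
	# only copy the tarball if it is not already on the node
	sudo stat -c "%s %y" "$IMG" 2>/dev/null || \
	  scp ~/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 user@node:"$IMG"   # user@node is hypothetical
	# import into the namespace containerd uses for Kubernetes images
	sudo ctr -n=k8s.io images import "$IMG"
	# verify the image is now visible to the CRI
	sudo crictl images | grep pause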
	I1119 22:19:56.888324  248121 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 22:19:56.888394  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 22:19:57.461211  248121 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1119 22:19:57.461286  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:19:57.982686  248121 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.094253154s)
	I1119 22:19:57.982716  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1119 22:19:57.982712  248121 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1119 22:19:57.982738  248121 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1119 22:19:57.982764  248121 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:19:57.982789  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1119 22:19:57.982801  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:58.943228  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1119 22:19:58.943276  248121 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1119 22:19:58.943321  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1119 22:19:58.943326  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:19:55.919868  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:19:55.920354  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:19:55.920400  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:19:55.920445  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:19:55.949031  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:55.949059  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:55.949065  216336 cri.go:89] found id: ""
	I1119 22:19:55.949074  216336 logs.go:282] 2 containers: [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:19:55.949133  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:55.953108  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:55.957378  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:19:55.957442  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:19:55.987066  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:55.987094  216336 cri.go:89] found id: ""
	I1119 22:19:55.987104  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:19:55.987165  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:55.991215  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:19:55.991296  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:19:56.020982  216336 cri.go:89] found id: ""
	I1119 22:19:56.021011  216336 logs.go:282] 0 containers: []
	W1119 22:19:56.021022  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:19:56.021031  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:19:56.021093  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:19:56.051114  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:56.051138  216336 cri.go:89] found id: ""
	I1119 22:19:56.051147  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:19:56.051210  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.056071  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:19:56.056142  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:19:56.085375  216336 cri.go:89] found id: ""
	I1119 22:19:56.085398  216336 logs.go:282] 0 containers: []
	W1119 22:19:56.085405  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:19:56.085414  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:19:56.085457  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:19:56.114914  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:56.114941  216336 cri.go:89] found id: ""
	I1119 22:19:56.114951  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:19:56.115011  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.119718  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:19:56.119785  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:19:56.148992  216336 cri.go:89] found id: ""
	I1119 22:19:56.149019  216336 logs.go:282] 0 containers: []
	W1119 22:19:56.149029  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:19:56.149037  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:19:56.149096  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:19:56.179135  216336 cri.go:89] found id: ""
	I1119 22:19:56.179163  216336 logs.go:282] 0 containers: []
	W1119 22:19:56.179173  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:19:56.179190  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:19:56.179204  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:56.216379  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:19:56.216409  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:56.252073  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:19:56.252103  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:19:56.283542  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:19:56.283567  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:19:56.381327  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:19:56.381359  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:19:56.399981  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:19:56.400019  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:19:56.493857  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:19:56.493894  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:19:56.493913  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:56.537441  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:19:56.537473  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:19:56.590041  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:19:56.590076  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:56.633876  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:19:56.633925  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:59.179328  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:19:59.179856  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:19:59.179947  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:19:59.180012  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:19:59.213304  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:59.213329  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:59.213336  216336 cri.go:89] found id: ""
	I1119 22:19:59.213346  216336 logs.go:282] 2 containers: [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:19:59.213410  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:59.218953  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:59.223649  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:19:59.223722  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:19:59.256070  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:59.256133  216336 cri.go:89] found id: ""
	I1119 22:19:59.256144  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:19:59.256211  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:59.261436  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:19:59.261514  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:19:59.294827  216336 cri.go:89] found id: ""
	I1119 22:19:59.294854  216336 logs.go:282] 0 containers: []
	W1119 22:19:59.294864  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:19:59.294871  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:19:59.294944  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:19:59.328052  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:59.328078  216336 cri.go:89] found id: ""
	I1119 22:19:59.328087  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:19:59.328148  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:59.333661  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:19:59.333745  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:19:59.367498  216336 cri.go:89] found id: ""
	I1119 22:19:59.367525  216336 logs.go:282] 0 containers: []
	W1119 22:19:59.367534  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:19:59.367543  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:19:59.367601  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:19:59.401843  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:59.401868  216336 cri.go:89] found id: ""
	I1119 22:19:59.401877  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:19:59.401982  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:59.406399  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:19:59.406473  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:19:59.437867  216336 cri.go:89] found id: ""
	I1119 22:19:59.437948  216336 logs.go:282] 0 containers: []
	W1119 22:19:59.437957  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:19:59.437963  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:19:59.438041  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:19:59.465826  216336 cri.go:89] found id: ""
	I1119 22:19:59.465856  216336 logs.go:282] 0 containers: []
	W1119 22:19:59.465866  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:19:59.465905  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:19:59.465953  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:19:59.498633  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:19:59.498670  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:19:59.586643  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:19:59.586677  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:19:59.602123  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:19:59.602148  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:19:59.668657  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:19:59.668675  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:19:59.668702  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:59.705026  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:19:59.705060  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:59.741520  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:19:59.741550  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:59.780920  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:19:59.780952  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:59.819532  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:19:59.819572  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:59.861394  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:19:59.861428  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:19:57.633270  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:58.133177  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:58.633156  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:59.133958  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:59.632816  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:00.133904  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:00.633510  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:01.132810  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:01.632963  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:02.132866  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:00.209856  248121 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.266503638s)
	I1119 22:20:00.209924  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1119 22:20:00.209943  248121 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.266589504s)
	I1119 22:20:00.209953  248121 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 22:20:00.210022  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:20:00.210039  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 22:20:01.315659  248121 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.105588091s)
	I1119 22:20:01.315688  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1119 22:20:01.315709  248121 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 22:20:01.315726  248121 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.105675845s)
	I1119 22:20:01.315757  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 22:20:01.315796  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:20:02.564406  248121 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.248612967s)
	I1119 22:20:02.564435  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1119 22:20:02.564452  248121 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.248631025s)
	I1119 22:20:02.564470  248121 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1119 22:20:02.564502  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1119 22:20:02.564519  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1119 22:20:02.564590  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1119 22:20:02.568829  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1119 22:20:02.568862  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1119 22:20:02.417703  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:20:02.418103  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:20:02.418159  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:20:02.418203  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:20:02.450244  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:20:02.450266  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:02.450271  216336 cri.go:89] found id: ""
	I1119 22:20:02.450280  216336 logs.go:282] 2 containers: [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:20:02.450336  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:02.455477  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:02.460188  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:20:02.460263  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:20:02.491317  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:02.491341  216336 cri.go:89] found id: ""
	I1119 22:20:02.491351  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:20:02.491409  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:02.495754  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:20:02.495837  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:20:02.526395  216336 cri.go:89] found id: ""
	I1119 22:20:02.526421  216336 logs.go:282] 0 containers: []
	W1119 22:20:02.526433  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:20:02.526441  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:20:02.526509  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:20:02.556596  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:02.556619  216336 cri.go:89] found id: ""
	I1119 22:20:02.556629  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:20:02.556686  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:02.561029  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:20:02.561102  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:20:02.593442  216336 cri.go:89] found id: ""
	I1119 22:20:02.593468  216336 logs.go:282] 0 containers: []
	W1119 22:20:02.593480  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:20:02.593488  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:20:02.593547  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:20:02.626155  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:02.626181  216336 cri.go:89] found id: ""
	I1119 22:20:02.626191  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:20:02.626239  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:02.630831  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:20:02.630910  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:20:02.663060  216336 cri.go:89] found id: ""
	I1119 22:20:02.663088  216336 logs.go:282] 0 containers: []
	W1119 22:20:02.663098  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:20:02.663106  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:20:02.663159  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:20:02.692104  216336 cri.go:89] found id: ""
	I1119 22:20:02.692132  216336 logs.go:282] 0 containers: []
	W1119 22:20:02.692142  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:20:02.692159  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:20:02.692172  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:02.730157  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:20:02.730198  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:20:02.764408  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:20:02.764435  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:20:02.871409  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:20:02.871460  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:20:02.912737  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:20:02.912778  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:02.958177  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:20:02.958229  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:03.003908  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:20:03.003950  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:20:03.062041  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:20:03.062076  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:20:03.080938  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:20:03.080972  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:20:03.153154  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:20:03.153177  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:20:03.153191  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:02.633509  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:03.132907  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:03.633598  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:04.133836  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:04.632911  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:05.133740  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:05.633397  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:06.133422  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:06.633053  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:07.133122  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:07.632971  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:07.709877  244005 kubeadm.go:1114] duration metric: took 12.184544724s to wait for elevateKubeSystemPrivileges
	I1119 22:20:07.709929  244005 kubeadm.go:403] duration metric: took 23.328681682s to StartCluster
	I1119 22:20:07.709949  244005 settings.go:142] acquiring lock: {Name:mk3c795849984e82ee99295088dd85252bd75f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:07.710024  244005 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:20:07.711281  244005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/kubeconfig: {Name:mk5b9093863cb8ca8629eea9fd861356875781d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:07.726769  244005 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:20:07.726909  244005 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:20:07.727036  244005 config.go:182] Loaded profile config "old-k8s-version-975700": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 22:20:07.727028  244005 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:20:07.727107  244005 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-975700"
	I1119 22:20:07.727154  244005 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-975700"
	I1119 22:20:07.727201  244005 host.go:66] Checking if "old-k8s-version-975700" exists ...
	I1119 22:20:07.727269  244005 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-975700"
	I1119 22:20:07.727331  244005 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-975700"
	I1119 22:20:07.727652  244005 cli_runner.go:164] Run: docker container inspect old-k8s-version-975700 --format={{.State.Status}}
	I1119 22:20:07.727759  244005 cli_runner.go:164] Run: docker container inspect old-k8s-version-975700 --format={{.State.Status}}
	I1119 22:20:07.759624  244005 out.go:179] * Verifying Kubernetes components...
	I1119 22:20:07.760449  244005 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-975700"
	I1119 22:20:07.760487  244005 host.go:66] Checking if "old-k8s-version-975700" exists ...
	I1119 22:20:07.760848  244005 cli_runner.go:164] Run: docker container inspect old-k8s-version-975700 --format={{.State.Status}}
	I1119 22:20:07.781264  244005 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:20:07.781292  244005 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:20:07.781358  244005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-975700
	I1119 22:20:07.790624  244005 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:20:07.790705  244005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:20:07.805293  244005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/old-k8s-version-975700/id_rsa Username:docker}
	I1119 22:20:07.811125  244005 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:20:07.811152  244005 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:20:07.811221  244005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-975700
	I1119 22:20:07.839037  244005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/old-k8s-version-975700/id_rsa Username:docker}
	I1119 22:20:07.927378  244005 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:20:07.930474  244005 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:20:07.930565  244005 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:20:07.945012  244005 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:20:08.325616  244005 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
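For reference, the bash one-liner run at 22:20:07.930565 is what produces the "host record injected" message above: it patches the CoreDNS ConfigMap so pods can resolve the host-side gateway by name. Reconstructed from the sed expression in that command (a sketch, not captured from the live ConfigMap), the patched Corefile should contain roughly:

	        log
	        errors
	        ...
	        hosts {
	           192.168.94.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf

that is, a hosts block answering host.minikube.internal -> 192.168.94.1 ahead of the forward plugin, plus query logging inserted in front of errors.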
	I1119 22:20:08.326981  244005 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-975700" to be "Ready" ...
	I1119 22:20:08.525071  244005 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1119 22:20:05.409665  248121 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (2.845117956s)
	I1119 22:20:05.409701  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1119 22:20:05.409742  248121 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1119 22:20:05.409813  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1119 22:20:05.827105  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1119 22:20:05.827149  248121 cache_images.go:125] Successfully loaded all cached images
	I1119 22:20:05.827155  248121 cache_images.go:94] duration metric: took 9.641883158s to LoadCachedImages
	I1119 22:20:05.827169  248121 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 containerd true true} ...
	I1119 22:20:05.827281  248121 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-638439 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-638439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
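Two details of the kubelet unit fragment rendered above are worth noting. The bare ExecStart= line is the standard systemd idiom for clearing the ExecStart inherited from the base kubelet.service before redefining it, and the fragment is installed a little later as the drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes at 22:20:07.476103), followed by a daemon-reload. On the node, the merged unit can be inspected with, for example:

	sudo systemctl cat kubelet

(not something the test runs itself, just a way to verify that the drop-in override took effect).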
	I1119 22:20:05.827350  248121 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:20:05.854538  248121 cni.go:84] Creating CNI manager for ""
	I1119 22:20:05.854565  248121 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:20:05.854580  248121 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:20:05.854605  248121 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-638439 NodeName:no-preload-638439 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:20:05.854728  248121 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-638439"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
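The four objects above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are the config that gets written to /var/tmp/minikube/kubeadm.yaml.new at 22:20:07.504456 and handed to kubeadm init further down. A config like this can also be linted offline; assuming the validate subcommand that ships with current kubeadm (it is not invoked in this run), for example:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml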
	
	I1119 22:20:05.854794  248121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:20:05.863483  248121 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1119 22:20:05.863536  248121 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1119 22:20:05.871942  248121 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1119 22:20:05.871968  248121 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1119 22:20:05.871947  248121 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1119 22:20:05.872035  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1119 22:20:05.876399  248121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1119 22:20:05.876433  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1119 22:20:07.043592  248121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:20:07.058665  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1119 22:20:07.063097  248121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1119 22:20:07.063136  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1119 22:20:07.259328  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1119 22:20:07.263904  248121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1119 22:20:07.263944  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1119 22:20:07.467537  248121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:20:07.476103  248121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1119 22:20:07.489039  248121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:20:07.504456  248121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1119 22:20:07.517675  248121 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:20:07.521966  248121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
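Taken together, the two commands above keep exactly one control-plane entry in /etc/hosts: the grep checks whether the mapping is already present, and the bash one-liner writes a filtered copy to a temp file (dropping any stale control-plane.minikube.internal line), appends the current mapping, and copies the file back over /etc/hosts. Based on the command text, the resulting entry is simply:

	192.168.103.2	control-plane.minikube.internal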
	I1119 22:20:07.532448  248121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:20:07.616669  248121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:20:07.647854  248121 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439 for IP: 192.168.103.2
	I1119 22:20:07.647911  248121 certs.go:195] generating shared ca certs ...
	I1119 22:20:07.647941  248121 certs.go:227] acquiring lock for ca certs: {Name:mkfe62d1b64cfdbe1c6a3d1f38aa0edc5b9ec419 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:07.648100  248121 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key
	I1119 22:20:07.648156  248121 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key
	I1119 22:20:07.648169  248121 certs.go:257] generating profile certs ...
	I1119 22:20:07.648233  248121 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.key
	I1119 22:20:07.648249  248121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.crt with IP's: []
	I1119 22:20:08.248835  248121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.crt ...
	I1119 22:20:08.248872  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.crt: {Name:mk71551595bc691ff029aa4f22d8136d735c86c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:08.249095  248121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.key ...
	I1119 22:20:08.249107  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.key: {Name:mk7714d393e738013c7abe0f1689bcf490e26b5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:08.249250  248121 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.key.6e1d1cff
	I1119 22:20:08.249267  248121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.crt.6e1d1cff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1119 22:20:09.018572  248121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.crt.6e1d1cff ...
	I1119 22:20:09.018603  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.crt.6e1d1cff: {Name:mk1a2db3ea3ff5c82c4c822f2131fbadbd39c724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:09.018790  248121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.key.6e1d1cff ...
	I1119 22:20:09.018808  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.key.6e1d1cff: {Name:mk13f089d71bdc7abee8608285249f8ab5ad14b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:09.018926  248121 certs.go:382] copying /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.crt.6e1d1cff -> /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.crt
	I1119 22:20:09.019033  248121 certs.go:386] copying /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.key.6e1d1cff -> /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.key
	I1119 22:20:09.019118  248121 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.key
	I1119 22:20:09.019145  248121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.crt with IP's: []
	I1119 22:20:09.141320  248121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.crt ...
	I1119 22:20:09.141353  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.crt: {Name:mke73db150d5fe88961c2b7ca7e43e6cb8c1e87f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:09.141532  248121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.key ...
	I1119 22:20:09.141550  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.key: {Name:mk65b56a4bcd9d60fdf62f046abf7a5abe0e729f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:09.141750  248121 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem (1338 bytes)
	W1119 22:20:09.141799  248121 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821_empty.pem, impossibly tiny 0 bytes
	I1119 22:20:09.141812  248121 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:20:09.141845  248121 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:20:09.141894  248121 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:20:09.141928  248121 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem (1679 bytes)
	I1119 22:20:09.141984  248121 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem (1708 bytes)
	I1119 22:20:09.142554  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:20:09.161569  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:20:09.180990  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:20:09.199264  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:20:09.217135  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 22:20:09.236364  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:20:09.255084  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:20:09.274604  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:20:09.293451  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:20:09.315834  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem --> /usr/share/ca-certificates/12821.pem (1338 bytes)
	I1119 22:20:09.336567  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem --> /usr/share/ca-certificates/128212.pem (1708 bytes)
	I1119 22:20:09.354248  248121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:20:09.367868  248121 ssh_runner.go:195] Run: openssl version
	I1119 22:20:09.374260  248121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:20:09.383332  248121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:20:09.387801  248121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:48 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:20:09.387864  248121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:20:09.424342  248121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:20:09.433605  248121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12821.pem && ln -fs /usr/share/ca-certificates/12821.pem /etc/ssl/certs/12821.pem"
	I1119 22:20:09.442478  248121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12821.pem
	I1119 22:20:09.446634  248121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12821.pem
	I1119 22:20:09.446694  248121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12821.pem
	I1119 22:20:09.481876  248121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12821.pem /etc/ssl/certs/51391683.0"
	I1119 22:20:09.491181  248121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128212.pem && ln -fs /usr/share/ca-certificates/128212.pem /etc/ssl/certs/128212.pem"
	I1119 22:20:09.499823  248121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128212.pem
	I1119 22:20:09.503986  248121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128212.pem
	I1119 22:20:09.504043  248121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128212.pem
	I1119 22:20:09.539481  248121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128212.pem /etc/ssl/certs/3ec20f2e.0"
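The sequence repeated three times above is how the certificates are made visible to OpenSSL-based clients on the node: each PEM already copied under /usr/share/ca-certificates is symlinked into /etc/ssl/certs by name, its subject hash is computed with openssl x509 -hash -noout, and a /etc/ssl/certs/<hash>.0 symlink is added so lookups by hash succeed. From the commands logged, minikubeCA.pem maps to b5213941.0, 12821.pem to 51391683.0, and 128212.pem to 3ec20f2e.0; the same hash can be reproduced manually with the command already used in the log, e.g.:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem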
	I1119 22:20:09.548630  248121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:20:09.552649  248121 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:20:09.552709  248121 kubeadm.go:401] StartCluster: {Name:no-preload-638439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-638439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:20:09.552800  248121 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:20:09.552841  248121 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:20:09.580504  248121 cri.go:89] found id: ""
	I1119 22:20:09.580577  248121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:20:09.588825  248121 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:20:09.597263  248121 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:20:09.597312  248121 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:20:09.605431  248121 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:20:09.605448  248121 kubeadm.go:158] found existing configuration files:
	
	I1119 22:20:09.605505  248121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 22:20:09.613580  248121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:20:09.613647  248121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:20:09.621432  248121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 22:20:09.629381  248121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:20:09.629444  248121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:20:09.637498  248121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 22:20:09.645457  248121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:20:09.645500  248121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:20:09.653775  248121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 22:20:09.662581  248121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:20:09.662631  248121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 22:20:09.670267  248121 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:20:09.705969  248121 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:20:09.706049  248121 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:20:09.725461  248121 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:20:09.725557  248121 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 22:20:09.725619  248121 kubeadm.go:319] OS: Linux
	I1119 22:20:09.725688  248121 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:20:09.725759  248121 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:20:09.725823  248121 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:20:09.725926  248121 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:20:09.726011  248121 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:20:09.726090  248121 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:20:09.726169  248121 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:20:09.726247  248121 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 22:20:09.785631  248121 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:20:09.785785  248121 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:20:09.785930  248121 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:20:09.790816  248121 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:20:05.698391  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:20:08.526183  244005 addons.go:515] duration metric: took 799.151282ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1119 22:20:08.830648  244005 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-975700" context rescaled to 1 replicas
	W1119 22:20:10.330548  244005 node_ready.go:57] node "old-k8s-version-975700" has "Ready":"False" status (will retry)
	W1119 22:20:12.330688  244005 node_ready.go:57] node "old-k8s-version-975700" has "Ready":"False" status (will retry)
	I1119 22:20:09.792948  248121 out.go:252]   - Generating certificates and keys ...
	I1119 22:20:09.793051  248121 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:20:09.793149  248121 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:20:10.738826  248121 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:20:10.908170  248121 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:20:11.291841  248121 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:20:11.623960  248121 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:20:11.828384  248121 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:20:11.828565  248121 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-638439] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 22:20:12.233215  248121 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:20:12.233354  248121 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-638439] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 22:20:12.358552  248121 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:20:12.567027  248121 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:20:12.649341  248121 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:20:12.649468  248121 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:20:12.821942  248121 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:20:13.184331  248121 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:20:13.249251  248121 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:20:13.507036  248121 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:20:13.992391  248121 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:20:13.992949  248121 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:20:14.073515  248121 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:20:10.699588  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1119 22:20:10.699656  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:20:10.699719  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:20:10.736721  216336 cri.go:89] found id: "7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:10.736747  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:20:10.736753  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:10.736758  216336 cri.go:89] found id: ""
	I1119 22:20:10.736767  216336 logs.go:282] 3 containers: [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:20:10.736834  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:10.742155  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:10.747306  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:10.752281  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:20:10.752356  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:20:10.785664  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:10.785691  216336 cri.go:89] found id: ""
	I1119 22:20:10.785700  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:20:10.785758  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:10.791037  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:20:10.791107  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:20:10.827690  216336 cri.go:89] found id: ""
	I1119 22:20:10.827736  216336 logs.go:282] 0 containers: []
	W1119 22:20:10.827749  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:20:10.827781  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:20:10.827856  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:20:10.860463  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:10.860489  216336 cri.go:89] found id: ""
	I1119 22:20:10.860499  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:20:10.860557  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:10.865818  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:20:10.865902  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:20:10.896395  216336 cri.go:89] found id: ""
	I1119 22:20:10.896425  216336 logs.go:282] 0 containers: []
	W1119 22:20:10.896457  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:20:10.896464  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:20:10.896524  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:20:10.927065  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:10.927091  216336 cri.go:89] found id: ""
	I1119 22:20:10.927100  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:20:10.927157  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:10.931718  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:20:10.931789  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:20:10.960849  216336 cri.go:89] found id: ""
	I1119 22:20:10.960892  216336 logs.go:282] 0 containers: []
	W1119 22:20:10.960903  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:20:10.960910  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:20:10.960962  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:20:10.993029  216336 cri.go:89] found id: ""
	I1119 22:20:10.993057  216336 logs.go:282] 0 containers: []
	W1119 22:20:10.993067  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:20:10.993080  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:20:10.993094  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:20:11.027974  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:20:11.028010  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:11.062086  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:20:11.062120  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:11.103210  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:20:11.103250  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:11.145837  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:20:11.145872  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:20:11.199841  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:20:11.199937  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:20:11.236586  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:20:11.236618  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:20:11.253432  216336 logs.go:123] Gathering logs for kube-apiserver [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7] ...
	I1119 22:20:11.253487  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:11.295903  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:20:11.295943  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:11.337708  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:20:11.337745  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:20:11.452249  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:20:11.452285  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:20:14.830008  244005 node_ready.go:57] node "old-k8s-version-975700" has "Ready":"False" status (will retry)
	W1119 22:20:16.830268  244005 node_ready.go:57] node "old-k8s-version-975700" has "Ready":"False" status (will retry)
	I1119 22:20:14.075591  248121 out.go:252]   - Booting up control plane ...
	I1119 22:20:14.075701  248121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:20:14.075795  248121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:20:14.076511  248121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:20:14.092600  248121 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:20:14.092767  248121 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:20:14.099651  248121 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:20:14.099786  248121 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:20:14.099865  248121 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:20:14.205620  248121 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:20:14.205784  248121 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:20:14.707136  248121 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.67843ms
	I1119 22:20:14.711176  248121 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:20:14.711406  248121 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1119 22:20:14.711556  248121 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:20:14.711669  248121 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:20:16.370429  248121 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.659105526s
	I1119 22:20:16.919263  248121 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.208262146s
	I1119 22:20:18.712413  248121 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001122323s
	I1119 22:20:18.724319  248121 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:20:18.734195  248121 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:20:18.743489  248121 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:20:18.743707  248121 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-638439 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:20:18.749843  248121 kubeadm.go:319] [bootstrap-token] Using token: tkvbyg.4blpqvlc8c0koqab
	I1119 22:20:18.751541  248121 out.go:252]   - Configuring RBAC rules ...
	I1119 22:20:18.751647  248121 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:20:18.754347  248121 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:20:18.760461  248121 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:20:18.763019  248121 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:20:18.765434  248121 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:20:18.768021  248121 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:20:19.119568  248121 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:20:19.537037  248121 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:20:20.119469  248121 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:20:20.120399  248121 kubeadm.go:319] 
	I1119 22:20:20.120467  248121 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:20:20.120472  248121 kubeadm.go:319] 
	I1119 22:20:20.120605  248121 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:20:20.120632  248121 kubeadm.go:319] 
	I1119 22:20:20.120661  248121 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:20:20.120719  248121 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:20:20.120765  248121 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:20:20.120772  248121 kubeadm.go:319] 
	I1119 22:20:20.120845  248121 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:20:20.120857  248121 kubeadm.go:319] 
	I1119 22:20:20.121004  248121 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:20:20.121029  248121 kubeadm.go:319] 
	I1119 22:20:20.121103  248121 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:20:20.121207  248121 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:20:20.121271  248121 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:20:20.121297  248121 kubeadm.go:319] 
	I1119 22:20:20.121444  248121 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:20:20.121523  248121 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:20:20.121533  248121 kubeadm.go:319] 
	I1119 22:20:20.121611  248121 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tkvbyg.4blpqvlc8c0koqab \
	I1119 22:20:20.121712  248121 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6123875ff628fb9eedbd72f2253477865aa197083b84a1d60cb6c00de308bc63 \
	I1119 22:20:20.121734  248121 kubeadm.go:319] 	--control-plane 
	I1119 22:20:20.121738  248121 kubeadm.go:319] 
	I1119 22:20:20.121810  248121 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:20:20.121816  248121 kubeadm.go:319] 
	I1119 22:20:20.121927  248121 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tkvbyg.4blpqvlc8c0koqab \
	I1119 22:20:20.122034  248121 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6123875ff628fb9eedbd72f2253477865aa197083b84a1d60cb6c00de308bc63 
	I1119 22:20:20.124555  248121 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 22:20:20.124740  248121 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:20:20.124773  248121 cni.go:84] Creating CNI manager for ""
	I1119 22:20:20.124786  248121 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:20:20.127350  248121 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1119 22:20:19.330624  244005 node_ready.go:57] node "old-k8s-version-975700" has "Ready":"False" status (will retry)
	W1119 22:20:21.830427  244005 node_ready.go:57] node "old-k8s-version-975700" has "Ready":"False" status (will retry)
	I1119 22:20:22.330516  244005 node_ready.go:49] node "old-k8s-version-975700" is "Ready"
	I1119 22:20:22.330545  244005 node_ready.go:38] duration metric: took 14.003533581s for node "old-k8s-version-975700" to be "Ready" ...
	I1119 22:20:22.330557  244005 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:20:22.330607  244005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:20:22.343206  244005 api_server.go:72] duration metric: took 14.6162161s to wait for apiserver process to appear ...
	I1119 22:20:22.343236  244005 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:20:22.343259  244005 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:20:22.347053  244005 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1119 22:20:22.348151  244005 api_server.go:141] control plane version: v1.28.0
	I1119 22:20:22.348175  244005 api_server.go:131] duration metric: took 4.933094ms to wait for apiserver health ...
	I1119 22:20:22.348183  244005 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:20:22.351821  244005 system_pods.go:59] 8 kube-system pods found
	I1119 22:20:22.351849  244005 system_pods.go:61] "coredns-5dd5756b68-8hdh7" [a4057bf2-fe2e-42db-83e9-bc625724c61c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:20:22.351854  244005 system_pods.go:61] "etcd-old-k8s-version-975700" [12a76858-b7be-4963-8323-fe57ca12a08d] Running
	I1119 22:20:22.351860  244005 system_pods.go:61] "kindnet-mlzfc" [e2532f4d-a32b-45a0-b846-1d2ecea1f926] Running
	I1119 22:20:22.351864  244005 system_pods.go:61] "kube-apiserver-old-k8s-version-975700" [28d03966-c950-4e9c-bbd5-4aeb08bb3363] Running
	I1119 22:20:22.351869  244005 system_pods.go:61] "kube-controller-manager-old-k8s-version-975700" [b2f2d323-34b1-47a7-945e-73086e2e6887] Running
	I1119 22:20:22.351873  244005 system_pods.go:61] "kube-proxy-rnxxf" [f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d] Running
	I1119 22:20:22.351877  244005 system_pods.go:61] "kube-scheduler-old-k8s-version-975700" [65c95750-3a2f-4847-a93d-4e54bc709449] Running
	I1119 22:20:22.351892  244005 system_pods.go:61] "storage-provisioner" [6c937194-8889-47a0-b05f-7af799e18044] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:20:22.351898  244005 system_pods.go:74] duration metric: took 3.709193ms to wait for pod list to return data ...
	I1119 22:20:22.351906  244005 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:20:22.353863  244005 default_sa.go:45] found service account: "default"
	I1119 22:20:22.353906  244005 default_sa.go:55] duration metric: took 1.968518ms for default service account to be created ...
	I1119 22:20:22.353917  244005 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:20:22.356763  244005 system_pods.go:86] 8 kube-system pods found
	I1119 22:20:22.356787  244005 system_pods.go:89] "coredns-5dd5756b68-8hdh7" [a4057bf2-fe2e-42db-83e9-bc625724c61c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:20:22.356792  244005 system_pods.go:89] "etcd-old-k8s-version-975700" [12a76858-b7be-4963-8323-fe57ca12a08d] Running
	I1119 22:20:22.356799  244005 system_pods.go:89] "kindnet-mlzfc" [e2532f4d-a32b-45a0-b846-1d2ecea1f926] Running
	I1119 22:20:22.356803  244005 system_pods.go:89] "kube-apiserver-old-k8s-version-975700" [28d03966-c950-4e9c-bbd5-4aeb08bb3363] Running
	I1119 22:20:22.356810  244005 system_pods.go:89] "kube-controller-manager-old-k8s-version-975700" [b2f2d323-34b1-47a7-945e-73086e2e6887] Running
	I1119 22:20:22.356813  244005 system_pods.go:89] "kube-proxy-rnxxf" [f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d] Running
	I1119 22:20:22.356817  244005 system_pods.go:89] "kube-scheduler-old-k8s-version-975700" [65c95750-3a2f-4847-a93d-4e54bc709449] Running
	I1119 22:20:22.356822  244005 system_pods.go:89] "storage-provisioner" [6c937194-8889-47a0-b05f-7af799e18044] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:20:22.356838  244005 retry.go:31] will retry after 295.130955ms: missing components: kube-dns
	I1119 22:20:20.128552  248121 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:20:20.133893  248121 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:20:20.133928  248121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:20:20.148247  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:20:20.366418  248121 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:20:20.366472  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:20.366530  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-638439 minikube.k8s.io/updated_at=2025_11_19T22_20_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=no-preload-638439 minikube.k8s.io/primary=true
	I1119 22:20:20.472760  248121 ops.go:34] apiserver oom_adj: -16
	I1119 22:20:20.472956  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:20.973815  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:21.473583  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:21.973622  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:22.473704  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:22.973336  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:23.473849  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:23.973455  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:24.472997  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:24.537110  248121 kubeadm.go:1114] duration metric: took 4.170685845s to wait for elevateKubeSystemPrivileges
	I1119 22:20:24.537150  248121 kubeadm.go:403] duration metric: took 14.984446293s to StartCluster
	I1119 22:20:24.537173  248121 settings.go:142] acquiring lock: {Name:mk3c795849984e82ee99295088dd85252bd75f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:24.537243  248121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:20:24.539105  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/kubeconfig: {Name:mk5b9093863cb8ca8629eea9fd861356875781d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:24.539319  248121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:20:24.539342  248121 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:20:24.539397  248121 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:20:24.539519  248121 addons.go:70] Setting storage-provisioner=true in profile "no-preload-638439"
	I1119 22:20:24.539532  248121 config.go:182] Loaded profile config "no-preload-638439": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:20:24.539540  248121 addons.go:239] Setting addon storage-provisioner=true in "no-preload-638439"
	I1119 22:20:24.539552  248121 addons.go:70] Setting default-storageclass=true in profile "no-preload-638439"
	I1119 22:20:24.539577  248121 host.go:66] Checking if "no-preload-638439" exists ...
	I1119 22:20:24.539588  248121 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-638439"
	I1119 22:20:24.539936  248121 cli_runner.go:164] Run: docker container inspect no-preload-638439 --format={{.State.Status}}
	I1119 22:20:24.540134  248121 cli_runner.go:164] Run: docker container inspect no-preload-638439 --format={{.State.Status}}
	I1119 22:20:24.541288  248121 out.go:179] * Verifying Kubernetes components...
	I1119 22:20:24.543039  248121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:20:24.564207  248121 addons.go:239] Setting addon default-storageclass=true in "no-preload-638439"
	I1119 22:20:24.564253  248121 host.go:66] Checking if "no-preload-638439" exists ...
	I1119 22:20:24.564597  248121 cli_runner.go:164] Run: docker container inspect no-preload-638439 --format={{.State.Status}}
	I1119 22:20:24.564680  248121 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:20:24.568527  248121 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:20:24.568546  248121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:20:24.568596  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:20:24.597385  248121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa Username:docker}
	I1119 22:20:24.599498  248121 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:20:24.599523  248121 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:20:24.599582  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:20:24.624046  248121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa Username:docker}
	I1119 22:20:24.628608  248121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:20:24.684697  248121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:20:24.711970  248121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:20:24.742786  248121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:20:24.836401  248121 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1119 22:20:24.837864  248121 node_ready.go:35] waiting up to 6m0s for node "no-preload-638439" to be "Ready" ...
	I1119 22:20:25.026785  248121 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 22:20:21.527976  216336 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.075664087s)
	W1119 22:20:21.528025  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1119 22:20:24.028516  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:20:22.657454  244005 system_pods.go:86] 8 kube-system pods found
	I1119 22:20:22.657490  244005 system_pods.go:89] "coredns-5dd5756b68-8hdh7" [a4057bf2-fe2e-42db-83e9-bc625724c61c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:20:22.657499  244005 system_pods.go:89] "etcd-old-k8s-version-975700" [12a76858-b7be-4963-8323-fe57ca12a08d] Running
	I1119 22:20:22.657508  244005 system_pods.go:89] "kindnet-mlzfc" [e2532f4d-a32b-45a0-b846-1d2ecea1f926] Running
	I1119 22:20:22.657513  244005 system_pods.go:89] "kube-apiserver-old-k8s-version-975700" [28d03966-c950-4e9c-bbd5-4aeb08bb3363] Running
	I1119 22:20:22.657520  244005 system_pods.go:89] "kube-controller-manager-old-k8s-version-975700" [b2f2d323-34b1-47a7-945e-73086e2e6887] Running
	I1119 22:20:22.657526  244005 system_pods.go:89] "kube-proxy-rnxxf" [f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d] Running
	I1119 22:20:22.657534  244005 system_pods.go:89] "kube-scheduler-old-k8s-version-975700" [65c95750-3a2f-4847-a93d-4e54bc709449] Running
	I1119 22:20:22.657541  244005 system_pods.go:89] "storage-provisioner" [6c937194-8889-47a0-b05f-7af799e18044] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:20:22.657562  244005 retry.go:31] will retry after 290.603952ms: missing components: kube-dns
	I1119 22:20:22.951933  244005 system_pods.go:86] 8 kube-system pods found
	I1119 22:20:22.951963  244005 system_pods.go:89] "coredns-5dd5756b68-8hdh7" [a4057bf2-fe2e-42db-83e9-bc625724c61c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:20:22.951969  244005 system_pods.go:89] "etcd-old-k8s-version-975700" [12a76858-b7be-4963-8323-fe57ca12a08d] Running
	I1119 22:20:22.951974  244005 system_pods.go:89] "kindnet-mlzfc" [e2532f4d-a32b-45a0-b846-1d2ecea1f926] Running
	I1119 22:20:22.951978  244005 system_pods.go:89] "kube-apiserver-old-k8s-version-975700" [28d03966-c950-4e9c-bbd5-4aeb08bb3363] Running
	I1119 22:20:22.951983  244005 system_pods.go:89] "kube-controller-manager-old-k8s-version-975700" [b2f2d323-34b1-47a7-945e-73086e2e6887] Running
	I1119 22:20:22.951988  244005 system_pods.go:89] "kube-proxy-rnxxf" [f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d] Running
	I1119 22:20:22.951992  244005 system_pods.go:89] "kube-scheduler-old-k8s-version-975700" [65c95750-3a2f-4847-a93d-4e54bc709449] Running
	I1119 22:20:22.951996  244005 system_pods.go:89] "storage-provisioner" [6c937194-8889-47a0-b05f-7af799e18044] Running
	I1119 22:20:22.952009  244005 retry.go:31] will retry after 460.674944ms: missing components: kube-dns
	I1119 22:20:23.417271  244005 system_pods.go:86] 8 kube-system pods found
	I1119 22:20:23.417302  244005 system_pods.go:89] "coredns-5dd5756b68-8hdh7" [a4057bf2-fe2e-42db-83e9-bc625724c61c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:20:23.417309  244005 system_pods.go:89] "etcd-old-k8s-version-975700" [12a76858-b7be-4963-8323-fe57ca12a08d] Running
	I1119 22:20:23.417314  244005 system_pods.go:89] "kindnet-mlzfc" [e2532f4d-a32b-45a0-b846-1d2ecea1f926] Running
	I1119 22:20:23.417320  244005 system_pods.go:89] "kube-apiserver-old-k8s-version-975700" [28d03966-c950-4e9c-bbd5-4aeb08bb3363] Running
	I1119 22:20:23.417326  244005 system_pods.go:89] "kube-controller-manager-old-k8s-version-975700" [b2f2d323-34b1-47a7-945e-73086e2e6887] Running
	I1119 22:20:23.417331  244005 system_pods.go:89] "kube-proxy-rnxxf" [f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d] Running
	I1119 22:20:23.417336  244005 system_pods.go:89] "kube-scheduler-old-k8s-version-975700" [65c95750-3a2f-4847-a93d-4e54bc709449] Running
	I1119 22:20:23.417341  244005 system_pods.go:89] "storage-provisioner" [6c937194-8889-47a0-b05f-7af799e18044] Running
	I1119 22:20:23.417365  244005 retry.go:31] will retry after 513.116078ms: missing components: kube-dns
	I1119 22:20:23.935257  244005 system_pods.go:86] 8 kube-system pods found
	I1119 22:20:23.935284  244005 system_pods.go:89] "coredns-5dd5756b68-8hdh7" [a4057bf2-fe2e-42db-83e9-bc625724c61c] Running
	I1119 22:20:23.935290  244005 system_pods.go:89] "etcd-old-k8s-version-975700" [12a76858-b7be-4963-8323-fe57ca12a08d] Running
	I1119 22:20:23.935294  244005 system_pods.go:89] "kindnet-mlzfc" [e2532f4d-a32b-45a0-b846-1d2ecea1f926] Running
	I1119 22:20:23.935297  244005 system_pods.go:89] "kube-apiserver-old-k8s-version-975700" [28d03966-c950-4e9c-bbd5-4aeb08bb3363] Running
	I1119 22:20:23.935301  244005 system_pods.go:89] "kube-controller-manager-old-k8s-version-975700" [b2f2d323-34b1-47a7-945e-73086e2e6887] Running
	I1119 22:20:23.935304  244005 system_pods.go:89] "kube-proxy-rnxxf" [f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d] Running
	I1119 22:20:23.935308  244005 system_pods.go:89] "kube-scheduler-old-k8s-version-975700" [65c95750-3a2f-4847-a93d-4e54bc709449] Running
	I1119 22:20:23.935311  244005 system_pods.go:89] "storage-provisioner" [6c937194-8889-47a0-b05f-7af799e18044] Running
	I1119 22:20:23.935318  244005 system_pods.go:126] duration metric: took 1.581396028s to wait for k8s-apps to be running ...
	I1119 22:20:23.935324  244005 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:20:23.935362  244005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:20:23.948529  244005 system_svc.go:56] duration metric: took 13.192475ms WaitForService to wait for kubelet
	I1119 22:20:23.948562  244005 kubeadm.go:587] duration metric: took 16.221575338s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:20:23.948584  244005 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:20:23.951344  244005 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 22:20:23.951368  244005 node_conditions.go:123] node cpu capacity is 8
	I1119 22:20:23.951381  244005 node_conditions.go:105] duration metric: took 2.792615ms to run NodePressure ...
	I1119 22:20:23.951394  244005 start.go:242] waiting for startup goroutines ...
	I1119 22:20:23.951400  244005 start.go:247] waiting for cluster config update ...
	I1119 22:20:23.951411  244005 start.go:256] writing updated cluster config ...
	I1119 22:20:23.951671  244005 ssh_runner.go:195] Run: rm -f paused
	I1119 22:20:23.955724  244005 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:20:23.960403  244005 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-8hdh7" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:23.964724  244005 pod_ready.go:94] pod "coredns-5dd5756b68-8hdh7" is "Ready"
	I1119 22:20:23.964745  244005 pod_ready.go:86] duration metric: took 4.323941ms for pod "coredns-5dd5756b68-8hdh7" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:23.969212  244005 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:23.977143  244005 pod_ready.go:94] pod "etcd-old-k8s-version-975700" is "Ready"
	I1119 22:20:23.977172  244005 pod_ready.go:86] duration metric: took 7.932702ms for pod "etcd-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:23.984279  244005 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:23.990403  244005 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-975700" is "Ready"
	I1119 22:20:23.990436  244005 pod_ready.go:86] duration metric: took 6.116437ms for pod "kube-apiserver-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:23.994759  244005 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:24.360199  244005 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-975700" is "Ready"
	I1119 22:20:24.360227  244005 pod_ready.go:86] duration metric: took 365.436099ms for pod "kube-controller-manager-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:24.562023  244005 pod_ready.go:83] waiting for pod "kube-proxy-rnxxf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:24.960397  244005 pod_ready.go:94] pod "kube-proxy-rnxxf" is "Ready"
	I1119 22:20:24.960428  244005 pod_ready.go:86] duration metric: took 398.37739ms for pod "kube-proxy-rnxxf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:25.161533  244005 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:25.560960  244005 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-975700" is "Ready"
	I1119 22:20:25.560992  244005 pod_ready.go:86] duration metric: took 399.43384ms for pod "kube-scheduler-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:25.561003  244005 pod_ready.go:40] duration metric: took 1.605243985s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:20:25.605359  244005 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1119 22:20:25.607589  244005 out.go:203] 
	W1119 22:20:25.608986  244005 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 22:20:25.610519  244005 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 22:20:25.612224  244005 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-975700" cluster and "default" namespace by default
	I1119 22:20:25.028260  248121 addons.go:515] duration metric: took 488.871855ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 22:20:25.340186  248121 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-638439" context rescaled to 1 replicas
	W1119 22:20:26.840695  248121 node_ready.go:57] node "no-preload-638439" has "Ready":"False" status (will retry)
	W1119 22:20:28.841182  248121 node_ready.go:57] node "no-preload-638439" has "Ready":"False" status (will retry)
	I1119 22:20:26.041396  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:42420->192.168.76.2:8443: read: connection reset by peer
	I1119 22:20:26.041468  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:20:26.041590  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:20:26.074121  216336 cri.go:89] found id: "7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:26.074147  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:20:26.074156  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:26.074161  216336 cri.go:89] found id: ""
	I1119 22:20:26.074169  216336 logs.go:282] 3 containers: [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:20:26.074227  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:26.080252  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:26.086170  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:26.090514  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:20:26.090588  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:20:26.119338  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:26.119365  216336 cri.go:89] found id: ""
	I1119 22:20:26.119375  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:20:26.119431  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:26.123237  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:20:26.123308  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:20:26.150429  216336 cri.go:89] found id: ""
	I1119 22:20:26.150465  216336 logs.go:282] 0 containers: []
	W1119 22:20:26.150475  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:20:26.150488  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:20:26.150553  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:20:26.180127  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:26.180150  216336 cri.go:89] found id: ""
	I1119 22:20:26.180167  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:20:26.180222  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:26.185074  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:20:26.185141  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:20:26.216334  216336 cri.go:89] found id: ""
	I1119 22:20:26.216362  216336 logs.go:282] 0 containers: []
	W1119 22:20:26.216373  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:20:26.216381  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:20:26.216440  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:20:26.246928  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:26.246952  216336 cri.go:89] found id: ""
	I1119 22:20:26.246962  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:20:26.247027  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:26.252210  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:20:26.252281  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:20:26.283008  216336 cri.go:89] found id: ""
	I1119 22:20:26.283052  216336 logs.go:282] 0 containers: []
	W1119 22:20:26.283086  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:20:26.283101  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:20:26.283160  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:20:26.311983  216336 cri.go:89] found id: ""
	I1119 22:20:26.312016  216336 logs.go:282] 0 containers: []
	W1119 22:20:26.312026  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:20:26.312040  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:20:26.312059  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:20:26.372080  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:20:26.372108  216336 logs.go:123] Gathering logs for kube-apiserver [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7] ...
	I1119 22:20:26.372123  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:26.410125  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:20:26.410156  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:20:26.445052  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:20:26.445081  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:20:26.488314  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:20:26.488348  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:20:26.519759  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:20:26.519786  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:20:26.607720  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:20:26.607753  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:20:26.622164  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:20:26.622196  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:26.658569  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:20:26.658598  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:26.690380  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:20:26.690410  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:26.723334  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:20:26.723368  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:29.254435  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:20:29.254927  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:20:29.254988  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:20:29.255050  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:20:29.281477  216336 cri.go:89] found id: "7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:29.281503  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:29.281509  216336 cri.go:89] found id: ""
	I1119 22:20:29.281518  216336 logs.go:282] 2 containers: [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:20:29.281576  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:29.285991  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:29.289786  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:20:29.289841  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:20:29.315177  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:29.315199  216336 cri.go:89] found id: ""
	I1119 22:20:29.315208  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:20:29.315264  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:29.319376  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:20:29.319444  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:20:29.346951  216336 cri.go:89] found id: ""
	I1119 22:20:29.346973  216336 logs.go:282] 0 containers: []
	W1119 22:20:29.346980  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:20:29.346998  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:20:29.347043  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:20:29.374529  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:29.374549  216336 cri.go:89] found id: ""
	I1119 22:20:29.374556  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:20:29.374608  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:29.378833  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:20:29.378918  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:20:29.409418  216336 cri.go:89] found id: ""
	I1119 22:20:29.409456  216336 logs.go:282] 0 containers: []
	W1119 22:20:29.409468  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:20:29.409476  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:20:29.409542  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:20:29.439747  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:29.439767  216336 cri.go:89] found id: ""
	I1119 22:20:29.439775  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:20:29.439832  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:29.443967  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:20:29.444041  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:20:29.469669  216336 cri.go:89] found id: ""
	I1119 22:20:29.469695  216336 logs.go:282] 0 containers: []
	W1119 22:20:29.469705  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:20:29.469712  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:20:29.469769  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:20:29.496972  216336 cri.go:89] found id: ""
	I1119 22:20:29.497000  216336 logs.go:282] 0 containers: []
	W1119 22:20:29.497009  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:20:29.497026  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:20:29.497039  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:20:29.585833  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:20:29.585865  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:20:29.600450  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:20:29.600488  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:29.634599  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:20:29.634632  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:20:29.694751  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:20:29.694785  216336 logs.go:123] Gathering logs for kube-apiserver [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7] ...
	I1119 22:20:29.694799  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:29.728982  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:20:29.729009  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:29.762543  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:20:29.762572  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:29.794342  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:20:29.794374  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:29.828582  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:20:29.828610  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:20:29.874642  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:20:29.874672  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1119 22:20:31.341227  248121 node_ready.go:57] node "no-preload-638439" has "Ready":"False" status (will retry)
	W1119 22:20:33.840869  248121 node_ready.go:57] node "no-preload-638439" has "Ready":"False" status (will retry)
	I1119 22:20:32.406487  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:20:32.406952  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:20:32.407019  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:20:32.407075  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:20:32.436319  216336 cri.go:89] found id: "7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:32.436348  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:32.436355  216336 cri.go:89] found id: ""
	I1119 22:20:32.436368  216336 logs.go:282] 2 containers: [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:20:32.436424  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:32.440717  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:32.444717  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:20:32.444781  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:20:32.470631  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:32.470655  216336 cri.go:89] found id: ""
	I1119 22:20:32.470666  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:20:32.470725  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:32.474820  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:20:32.474893  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:20:32.504076  216336 cri.go:89] found id: ""
	I1119 22:20:32.504104  216336 logs.go:282] 0 containers: []
	W1119 22:20:32.504115  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:20:32.504125  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:20:32.504185  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:20:32.533110  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:32.533135  216336 cri.go:89] found id: ""
	I1119 22:20:32.533143  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:20:32.533215  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:32.537455  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:20:32.537523  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:20:32.564625  216336 cri.go:89] found id: ""
	I1119 22:20:32.564647  216336 logs.go:282] 0 containers: []
	W1119 22:20:32.564655  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:20:32.564661  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:20:32.564719  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:20:32.591414  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:32.591443  216336 cri.go:89] found id: ""
	I1119 22:20:32.591455  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:20:32.591535  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:32.595459  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:20:32.595529  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:20:32.621765  216336 cri.go:89] found id: ""
	I1119 22:20:32.621792  216336 logs.go:282] 0 containers: []
	W1119 22:20:32.621801  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:20:32.621807  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:20:32.621862  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:20:32.647922  216336 cri.go:89] found id: ""
	I1119 22:20:32.647948  216336 logs.go:282] 0 containers: []
	W1119 22:20:32.647958  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:20:32.647978  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:20:32.648005  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:32.680718  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:20:32.680745  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:20:32.726055  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:20:32.726088  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:20:32.757760  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:20:32.757794  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:20:32.848763  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:20:32.848797  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:20:32.862591  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:20:32.862631  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:20:32.922769  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:20:32.922788  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:20:32.922800  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:32.956142  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:20:32.956171  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:32.991968  216336 logs.go:123] Gathering logs for kube-apiserver [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7] ...
	I1119 22:20:32.992001  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:33.026022  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:20:33.026050  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	d5768828ca04f       56cc512116c8f       9 seconds ago       Running             busybox                   0                   36bf64ba3c00d       busybox                                          default
	dcb27a5492378       ead0a4a53df89       15 seconds ago      Running             coredns                   0                   6a75c4192812f       coredns-5dd5756b68-8hdh7                         kube-system
	537c778c87f9d       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   8fa22b8d20a3f       storage-provisioner                              kube-system
	9f637c51ffa43       409467f978b4a       26 seconds ago      Running             kindnet-cni               0                   cb55d544de2ea       kindnet-mlzfc                                    kube-system
	bfde9418adc9d       ea1030da44aa1       29 seconds ago      Running             kube-proxy                0                   4ca7d14c5d50a       kube-proxy-rnxxf                                 kube-system
	814e6989c6431       f6f496300a2ae       48 seconds ago      Running             kube-scheduler            0                   f5ceb3a12bb84       kube-scheduler-old-k8s-version-975700            kube-system
	1870cf3b3c44b       bb5e0dde9054c       48 seconds ago      Running             kube-apiserver            0                   52831c15e2557       kube-apiserver-old-k8s-version-975700            kube-system
	97883579e01ac       73deb9a3f7025       48 seconds ago      Running             etcd                      0                   e63e84e034d31       etcd-old-k8s-version-975700                      kube-system
	f4532683638eb       4be79c38a4bab       48 seconds ago      Running             kube-controller-manager   0                   250cc7adfeba7       kube-controller-manager-old-k8s-version-975700   kube-system
	
	
	==> containerd <==
	Nov 19 22:20:22 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:22.712366614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-8hdh7,Uid:a4057bf2-fe2e-42db-83e9-bc625724c61c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a75c4192812faee0e855fcba490a6d63eeaa3e8229ace4b9a3a2b128e801116\""
	Nov 19 22:20:22 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:22.715553681Z" level=info msg="CreateContainer within sandbox \"6a75c4192812faee0e855fcba490a6d63eeaa3e8229ace4b9a3a2b128e801116\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 19 22:20:22 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:22.722344581Z" level=info msg="Container dcb27a5492378c9249ef7c6af871ff41c7849ef2087b13036c4112f3826f90bf: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:20:22 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:22.728923728Z" level=info msg="CreateContainer within sandbox \"6a75c4192812faee0e855fcba490a6d63eeaa3e8229ace4b9a3a2b128e801116\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dcb27a5492378c9249ef7c6af871ff41c7849ef2087b13036c4112f3826f90bf\""
	Nov 19 22:20:22 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:22.729475146Z" level=info msg="StartContainer for \"dcb27a5492378c9249ef7c6af871ff41c7849ef2087b13036c4112f3826f90bf\""
	Nov 19 22:20:22 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:22.730499329Z" level=info msg="connecting to shim dcb27a5492378c9249ef7c6af871ff41c7849ef2087b13036c4112f3826f90bf" address="unix:///run/containerd/s/34a674b328f7f600d36cfd77d784cd14517a5b33bcc634daaca7b6dd09032aa9" protocol=ttrpc version=3
	Nov 19 22:20:22 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:22.757547812Z" level=info msg="StartContainer for \"537c778c87f9d8c20894001938b5632c0e5dcc6b1095fb4d266fd4b3995811b2\" returns successfully"
	Nov 19 22:20:22 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:22.786711759Z" level=info msg="StartContainer for \"dcb27a5492378c9249ef7c6af871ff41c7849ef2087b13036c4112f3826f90bf\" returns successfully"
	Nov 19 22:20:26 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:26.134603361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:b49caea0-80e8-4473-ac1f-f9bd327c3754,Namespace:default,Attempt:0,}"
	Nov 19 22:20:26 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:26.185916874Z" level=info msg="connecting to shim 36bf64ba3c00d9e0c7f71f899e9cd21577248641d207dcfc98340d1d6b3cb0d0" address="unix:///run/containerd/s/c0d7613134ce7e47335ad17357d4a66a2ab52af6386e2abf7c0d2ac536b7f638" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 22:20:26 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:26.262497493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:b49caea0-80e8-4473-ac1f-f9bd327c3754,Namespace:default,Attempt:0,} returns sandbox id \"36bf64ba3c00d9e0c7f71f899e9cd21577248641d207dcfc98340d1d6b3cb0d0\""
	Nov 19 22:20:26 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:26.264162086Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.373146514Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.374074587Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396641"
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.375650212Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.378263887Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.378735365Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.114534001s"
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.378776793Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.380562536Z" level=info msg="CreateContainer within sandbox \"36bf64ba3c00d9e0c7f71f899e9cd21577248641d207dcfc98340d1d6b3cb0d0\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.389100774Z" level=info msg="Container d5768828ca04f9295bf18e3fc30308deb6547c5a50a2782f1e71634c15ae7e9a: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.397616150Z" level=info msg="CreateContainer within sandbox \"36bf64ba3c00d9e0c7f71f899e9cd21577248641d207dcfc98340d1d6b3cb0d0\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"d5768828ca04f9295bf18e3fc30308deb6547c5a50a2782f1e71634c15ae7e9a\""
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.398260870Z" level=info msg="StartContainer for \"d5768828ca04f9295bf18e3fc30308deb6547c5a50a2782f1e71634c15ae7e9a\""
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.399512803Z" level=info msg="connecting to shim d5768828ca04f9295bf18e3fc30308deb6547c5a50a2782f1e71634c15ae7e9a" address="unix:///run/containerd/s/c0d7613134ce7e47335ad17357d4a66a2ab52af6386e2abf7c0d2ac536b7f638" protocol=ttrpc version=3
	Nov 19 22:20:28 old-k8s-version-975700 containerd[666]: time="2025-11-19T22:20:28.458456492Z" level=info msg="StartContainer for \"d5768828ca04f9295bf18e3fc30308deb6547c5a50a2782f1e71634c15ae7e9a\" returns successfully"
	Nov 19 22:20:34 old-k8s-version-975700 containerd[666]: E1119 22:20:34.905114     666 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [dcb27a5492378c9249ef7c6af871ff41c7849ef2087b13036c4112f3826f90bf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48436 - 61 "HINFO IN 2387730691433537035.6546186387081931462. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.161284203s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-975700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-975700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=old-k8s-version-975700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_19_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:19:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-975700
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:20:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:20:25 +0000   Wed, 19 Nov 2025 22:19:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:20:25 +0000   Wed, 19 Nov 2025 22:19:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:20:25 +0000   Wed, 19 Nov 2025 22:19:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:20:25 +0000   Wed, 19 Nov 2025 22:20:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-975700
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                3fcee5dd-d370-4209-8cfb-b52e4110e73b
	  Boot ID:                    f21fb8e8-9754-4dc5-a8d9-ce41ba5f6057
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-8hdh7                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30s
	  kube-system                 etcd-old-k8s-version-975700                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         43s
	  kube-system                 kindnet-mlzfc                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-975700             250m (3%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-controller-manager-old-k8s-version-975700    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-rnxxf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-975700             100m (1%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 29s   kube-proxy       
	  Normal  Starting                 43s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  43s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  43s   kubelet          Node old-k8s-version-975700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s   kubelet          Node old-k8s-version-975700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s   kubelet          Node old-k8s-version-975700 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s   node-controller  Node old-k8s-version-975700 event: Registered Node old-k8s-version-975700 in Controller
	  Normal  NodeReady                15s   kubelet          Node old-k8s-version-975700 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov19 21:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001836] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.089012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.424964] i8042: Warning: Keylock active
	[  +0.011946] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499038] block sda: the capability attribute has been deprecated.
	[  +0.090446] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026259] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.862736] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [97883579e01acd8bc2695b07f55c948f3a46c160bf534f88de73606eaba10069] <==
	{"level":"info","ts":"2025-11-19T22:19:49.465492Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-19T22:19:49.465528Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-19T22:19:50.345522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-19T22:19:50.345562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-19T22:19:50.345577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-11-19T22:19:50.345588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-11-19T22:19:50.345593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-19T22:19:50.345601Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-11-19T22:19:50.345607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-19T22:19:50.346237Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:19:50.346786Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T22:19:50.346778Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-975700 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-19T22:19:50.346819Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T22:19:50.34703Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:19:50.347114Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-19T22:19:50.347198Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-19T22:19:50.347172Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:19:50.347229Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:19:50.34807Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-19T22:19:50.348559Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"warn","ts":"2025-11-19T22:19:52.006287Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.664484ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/apiserver-pcqkfx5qiyeeley4bpw5zibjhu\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-19T22:19:52.0064Z","caller":"traceutil/trace.go:171","msg":"trace[898828708] range","detail":"{range_begin:/registry/leases/kube-system/apiserver-pcqkfx5qiyeeley4bpw5zibjhu; range_end:; response_count:0; response_revision:69; }","duration":"208.799616ms","start":"2025-11-19T22:19:51.797579Z","end":"2025-11-19T22:19:52.006378Z","steps":["trace[898828708] 'range keys from in-memory index tree'  (duration: 208.571934ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:20:07.925909Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.040627ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2025-11-19T22:20:07.925985Z","caller":"traceutil/trace.go:171","msg":"trace[1355111703] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:350; }","duration":"124.145953ms","start":"2025-11-19T22:20:07.801823Z","end":"2025-11-19T22:20:07.925969Z","steps":["trace[1355111703] 'range keys from in-memory index tree'  (duration: 123.893977ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:20:07.945114Z","caller":"traceutil/trace.go:171","msg":"trace[986567943] transaction","detail":"{read_only:false; response_revision:351; number_of_response:1; }","duration":"142.590181ms","start":"2025-11-19T22:20:07.802499Z","end":"2025-11-19T22:20:07.945089Z","steps":["trace[986567943] 'process raft request'  (duration: 142.419431ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:20:37 up  1:02,  0 user,  load average: 4.27, 3.36, 2.11
	Linux old-k8s-version-975700 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9f637c51ffa434a826f6584d8a7faf4701e1f09be3a0f36a1d28e02a37c6fb8d] <==
	I1119 22:20:11.957590       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:20:11.957822       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1119 22:20:11.958041       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:20:11.958058       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:20:11.958074       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:20:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:20:12.159373       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:20:12.159514       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:20:12.159531       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:20:12.159716       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:20:12.538063       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:20:12.538126       1 metrics.go:72] Registering metrics
	I1119 22:20:12.538374       1 controller.go:711] "Syncing nftables rules"
	I1119 22:20:22.164952       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1119 22:20:22.165012       1 main.go:301] handling current node
	I1119 22:20:32.161088       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1119 22:20:32.161124       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1870cf3b3c44ba81df1590d986f8a70efb48ac5a464f0a3d4d757b18fc420709] <==
	I1119 22:19:51.591405       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1119 22:19:51.591414       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1119 22:19:51.591407       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:19:51.591438       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:19:51.591387       1 shared_informer.go:318] Caches are synced for configmaps
	I1119 22:19:51.593118       1 controller.go:624] quota admission added evaluator for: namespaces
	E1119 22:19:51.595601       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1119 22:19:51.608554       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1119 22:19:52.008399       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:19:52.497067       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:19:52.500707       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:19:52.500727       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:19:52.938966       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:19:52.979169       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:19:53.101027       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:19:53.107157       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1119 22:19:53.108241       1 controller.go:624] quota admission added evaluator for: endpoints
	I1119 22:19:53.112503       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:19:53.552446       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1119 22:19:54.613121       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1119 22:19:54.625563       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:19:54.635960       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1119 22:20:06.459115       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1119 22:20:07.162080       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1119 22:20:07.162080       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [f4532683638eb7620857fe45f4fd3c3ed09ef48600c71e8fb4fb0f9dae88bfb2] <==
	I1119 22:20:06.563934       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-old-k8s-version-975700" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1119 22:20:06.565627       1 event.go:307] "Event occurred" object="kube-system/etcd-old-k8s-version-975700" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1119 22:20:06.565755       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-975700" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1119 22:20:06.609574       1 shared_informer.go:318] Caches are synced for resource quota
	I1119 22:20:06.927535       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 22:20:07.000472       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 22:20:07.000512       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1119 22:20:07.173283       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rnxxf"
	I1119 22:20:07.176815       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mlzfc"
	I1119 22:20:07.368445       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vbfhv"
	I1119 22:20:07.377915       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-8hdh7"
	I1119 22:20:07.385341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="921.876981ms"
	I1119 22:20:07.403436       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.02637ms"
	I1119 22:20:07.403590       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97µs"
	I1119 22:20:08.346162       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1119 22:20:08.357372       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-vbfhv"
	I1119 22:20:08.366742       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.01104ms"
	I1119 22:20:08.373376       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.555995ms"
	I1119 22:20:08.373523       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="103.519µs"
	I1119 22:20:22.284386       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="125.689µs"
	I1119 22:20:22.302759       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.916µs"
	I1119 22:20:23.804590       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.984643ms"
	I1119 22:20:23.825468       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.454615ms"
	I1119 22:20:23.825553       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.257µs"
	I1119 22:20:26.560333       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [bfde9418adc9d7aba501fe3c84086b7de3e6632fdd8aabb2eb31e57c6302f8a1] <==
	I1119 22:20:08.542091       1 server_others.go:69] "Using iptables proxy"
	I1119 22:20:08.554521       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1119 22:20:08.579485       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:20:08.581958       1 server_others.go:152] "Using iptables Proxier"
	I1119 22:20:08.581998       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1119 22:20:08.582008       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1119 22:20:08.582058       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1119 22:20:08.582375       1 server.go:846] "Version info" version="v1.28.0"
	I1119 22:20:08.582389       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:20:08.584350       1 config.go:315] "Starting node config controller"
	I1119 22:20:08.584377       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1119 22:20:08.584426       1 config.go:188] "Starting service config controller"
	I1119 22:20:08.584459       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1119 22:20:08.584486       1 config.go:97] "Starting endpoint slice config controller"
	I1119 22:20:08.584491       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1119 22:20:08.684578       1 shared_informer.go:318] Caches are synced for service config
	I1119 22:20:08.684601       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1119 22:20:08.684577       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [814e6989c64319d934f5f210646b29c75985c3fe82e3642066c6cced56537e32] <==
	W1119 22:19:51.558017       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1119 22:19:51.558302       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1119 22:19:51.557982       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1119 22:19:51.558323       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1119 22:19:51.558217       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1119 22:19:51.558365       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1119 22:19:52.378035       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1119 22:19:52.378068       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1119 22:19:52.502983       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1119 22:19:52.503017       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 22:19:52.577347       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1119 22:19:52.577387       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1119 22:19:52.620635       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1119 22:19:52.620663       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1119 22:19:52.621642       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1119 22:19:52.621673       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1119 22:19:52.622811       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1119 22:19:52.622838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1119 22:19:52.655572       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1119 22:19:52.655637       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1119 22:19:52.670809       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1119 22:19:52.670851       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1119 22:19:52.738351       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1119 22:19:52.738419       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1119 22:19:55.553708       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: I1119 22:20:07.254431    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2532f4d-a32b-45a0-b846-1d2ecea1f926-lib-modules\") pod \"kindnet-mlzfc\" (UID: \"e2532f4d-a32b-45a0-b846-1d2ecea1f926\") " pod="kube-system/kindnet-mlzfc"
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: I1119 22:20:07.254510    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fnz9\" (UniqueName: \"kubernetes.io/projected/f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d-kube-api-access-9fnz9\") pod \"kube-proxy-rnxxf\" (UID: \"f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d\") " pod="kube-system/kube-proxy-rnxxf"
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: I1119 22:20:07.254561    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e2532f4d-a32b-45a0-b846-1d2ecea1f926-cni-cfg\") pod \"kindnet-mlzfc\" (UID: \"e2532f4d-a32b-45a0-b846-1d2ecea1f926\") " pod="kube-system/kindnet-mlzfc"
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: I1119 22:20:07.254783    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d-kube-proxy\") pod \"kube-proxy-rnxxf\" (UID: \"f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d\") " pod="kube-system/kube-proxy-rnxxf"
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: I1119 22:20:07.254836    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d-xtables-lock\") pod \"kube-proxy-rnxxf\" (UID: \"f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d\") " pod="kube-system/kube-proxy-rnxxf"
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: E1119 22:20:07.363793    1560 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: E1119 22:20:07.363834    1560 projected.go:198] Error preparing data for projected volume kube-api-access-rpv66 for pod kube-system/kindnet-mlzfc: configmap "kube-root-ca.crt" not found
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: E1119 22:20:07.363943    1560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e2532f4d-a32b-45a0-b846-1d2ecea1f926-kube-api-access-rpv66 podName:e2532f4d-a32b-45a0-b846-1d2ecea1f926 nodeName:}" failed. No retries permitted until 2025-11-19 22:20:07.863913255 +0000 UTC m=+13.276094662 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rpv66" (UniqueName: "kubernetes.io/projected/e2532f4d-a32b-45a0-b846-1d2ecea1f926-kube-api-access-rpv66") pod "kindnet-mlzfc" (UID: "e2532f4d-a32b-45a0-b846-1d2ecea1f926") : configmap "kube-root-ca.crt" not found
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: E1119 22:20:07.364286    1560 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: E1119 22:20:07.364311    1560 projected.go:198] Error preparing data for projected volume kube-api-access-9fnz9 for pod kube-system/kube-proxy-rnxxf: configmap "kube-root-ca.crt" not found
	Nov 19 22:20:07 old-k8s-version-975700 kubelet[1560]: E1119 22:20:07.364372    1560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d-kube-api-access-9fnz9 podName:f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d nodeName:}" failed. No retries permitted until 2025-11-19 22:20:07.864353345 +0000 UTC m=+13.276534732 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9fnz9" (UniqueName: "kubernetes.io/projected/f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d-kube-api-access-9fnz9") pod "kube-proxy-rnxxf" (UID: "f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d") : configmap "kube-root-ca.crt" not found
	Nov 19 22:20:08 old-k8s-version-975700 kubelet[1560]: I1119 22:20:08.753381    1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rnxxf" podStartSLOduration=1.753327393 podCreationTimestamp="2025-11-19 22:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:20:08.753080476 +0000 UTC m=+14.165261906" watchObservedRunningTime="2025-11-19 22:20:08.753327393 +0000 UTC m=+14.165508800"
	Nov 19 22:20:12 old-k8s-version-975700 kubelet[1560]: I1119 22:20:12.861606    1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-mlzfc" podStartSLOduration=2.782502482 podCreationTimestamp="2025-11-19 22:20:07 +0000 UTC" firstStartedPulling="2025-11-19 22:20:08.564687803 +0000 UTC m=+13.976869202" lastFinishedPulling="2025-11-19 22:20:11.643733018 +0000 UTC m=+17.055914418" observedRunningTime="2025-11-19 22:20:12.861400313 +0000 UTC m=+18.273581719" watchObservedRunningTime="2025-11-19 22:20:12.861547698 +0000 UTC m=+18.273729104"
	Nov 19 22:20:22 old-k8s-version-975700 kubelet[1560]: I1119 22:20:22.261744    1560 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 19 22:20:22 old-k8s-version-975700 kubelet[1560]: I1119 22:20:22.283141    1560 topology_manager.go:215] "Topology Admit Handler" podUID="6c937194-8889-47a0-b05f-7af799e18044" podNamespace="kube-system" podName="storage-provisioner"
	Nov 19 22:20:22 old-k8s-version-975700 kubelet[1560]: I1119 22:20:22.284839    1560 topology_manager.go:215] "Topology Admit Handler" podUID="a4057bf2-fe2e-42db-83e9-bc625724c61c" podNamespace="kube-system" podName="coredns-5dd5756b68-8hdh7"
	Nov 19 22:20:22 old-k8s-version-975700 kubelet[1560]: I1119 22:20:22.465780    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbjsb\" (UniqueName: \"kubernetes.io/projected/6c937194-8889-47a0-b05f-7af799e18044-kube-api-access-xbjsb\") pod \"storage-provisioner\" (UID: \"6c937194-8889-47a0-b05f-7af799e18044\") " pod="kube-system/storage-provisioner"
	Nov 19 22:20:22 old-k8s-version-975700 kubelet[1560]: I1119 22:20:22.465975    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd7zm\" (UniqueName: \"kubernetes.io/projected/a4057bf2-fe2e-42db-83e9-bc625724c61c-kube-api-access-zd7zm\") pod \"coredns-5dd5756b68-8hdh7\" (UID: \"a4057bf2-fe2e-42db-83e9-bc625724c61c\") " pod="kube-system/coredns-5dd5756b68-8hdh7"
	Nov 19 22:20:22 old-k8s-version-975700 kubelet[1560]: I1119 22:20:22.466031    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6c937194-8889-47a0-b05f-7af799e18044-tmp\") pod \"storage-provisioner\" (UID: \"6c937194-8889-47a0-b05f-7af799e18044\") " pod="kube-system/storage-provisioner"
	Nov 19 22:20:22 old-k8s-version-975700 kubelet[1560]: I1119 22:20:22.466065    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4057bf2-fe2e-42db-83e9-bc625724c61c-config-volume\") pod \"coredns-5dd5756b68-8hdh7\" (UID: \"a4057bf2-fe2e-42db-83e9-bc625724c61c\") " pod="kube-system/coredns-5dd5756b68-8hdh7"
	Nov 19 22:20:22 old-k8s-version-975700 kubelet[1560]: I1119 22:20:22.790518    1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.790461437 podCreationTimestamp="2025-11-19 22:20:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:20:22.789226683 +0000 UTC m=+28.201408091" watchObservedRunningTime="2025-11-19 22:20:22.790461437 +0000 UTC m=+28.202642846"
	Nov 19 22:20:23 old-k8s-version-975700 kubelet[1560]: I1119 22:20:23.794502    1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-8hdh7" podStartSLOduration=16.794448045 podCreationTimestamp="2025-11-19 22:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:20:23.792204756 +0000 UTC m=+29.204386163" watchObservedRunningTime="2025-11-19 22:20:23.794448045 +0000 UTC m=+29.206629453"
	Nov 19 22:20:25 old-k8s-version-975700 kubelet[1560]: I1119 22:20:25.822716    1560 topology_manager.go:215] "Topology Admit Handler" podUID="b49caea0-80e8-4473-ac1f-f9bd327c3754" podNamespace="default" podName="busybox"
	Nov 19 22:20:25 old-k8s-version-975700 kubelet[1560]: I1119 22:20:25.990052    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87p55\" (UniqueName: \"kubernetes.io/projected/b49caea0-80e8-4473-ac1f-f9bd327c3754-kube-api-access-87p55\") pod \"busybox\" (UID: \"b49caea0-80e8-4473-ac1f-f9bd327c3754\") " pod="default/busybox"
	Nov 19 22:20:28 old-k8s-version-975700 kubelet[1560]: I1119 22:20:28.806269    1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.691001227 podCreationTimestamp="2025-11-19 22:20:25 +0000 UTC" firstStartedPulling="2025-11-19 22:20:26.263867005 +0000 UTC m=+31.676048399" lastFinishedPulling="2025-11-19 22:20:28.379090043 +0000 UTC m=+33.791271442" observedRunningTime="2025-11-19 22:20:28.805872451 +0000 UTC m=+34.218053858" watchObservedRunningTime="2025-11-19 22:20:28.80622427 +0000 UTC m=+34.218405676"
	
	
	==> storage-provisioner [537c778c87f9d8c20894001938b5632c0e5dcc6b1095fb4d266fd4b3995811b2] <==
	I1119 22:20:22.762742       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:20:22.772216       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:20:22.772484       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1119 22:20:22.782676       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:20:22.782729       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"750e6d2d-dbb6-45a4-b78a-de5bffe0f948", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-975700_aeb53126-798f-4b08-be45-abf6358cfbca became leader
	I1119 22:20:22.782814       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-975700_aeb53126-798f-4b08-be45-abf6358cfbca!
	I1119 22:20:22.883137       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-975700_aeb53126-798f-4b08-be45-abf6358cfbca!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-975700 -n old-k8s-version-975700
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-975700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (13.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (12.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-638439 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7de716fc-5cc0-401e-af15-e754abb3f8ee] Pending
helpers_test.go:352: "busybox" [7de716fc-5cc0-401e-af15-e754abb3f8ee] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7de716fc-5cc0-401e-af15-e754abb3f8ee] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004361715s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-638439 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
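For reference, the failing assertion can be reproduced by hand against the same profile. This is a minimal sketch using the pod and context names from the run above; the second command is an assumption about where the limit is inherited from, checking the minikube node container directly rather than anything the test itself runs:

    kubectl --context no-preload-638439 exec busybox -- /bin/sh -c "ulimit -n"   # the test expects 1048576
    docker exec no-preload-638439 /bin/sh -c "ulimit -n"                         # open-files limit inside the node container (assumed relevant)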
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-638439
helpers_test.go:243: (dbg) docker inspect no-preload-638439:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4ff4bb9a387d8fad035c97ac1b287af406bd01ea1bd851631d39a79ee3cf699f",
	        "Created": "2025-11-19T22:19:50.066386297Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 249040,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:19:50.106148209Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/4ff4bb9a387d8fad035c97ac1b287af406bd01ea1bd851631d39a79ee3cf699f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4ff4bb9a387d8fad035c97ac1b287af406bd01ea1bd851631d39a79ee3cf699f/hostname",
	        "HostsPath": "/var/lib/docker/containers/4ff4bb9a387d8fad035c97ac1b287af406bd01ea1bd851631d39a79ee3cf699f/hosts",
	        "LogPath": "/var/lib/docker/containers/4ff4bb9a387d8fad035c97ac1b287af406bd01ea1bd851631d39a79ee3cf699f/4ff4bb9a387d8fad035c97ac1b287af406bd01ea1bd851631d39a79ee3cf699f-json.log",
	        "Name": "/no-preload-638439",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-638439:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-638439",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4ff4bb9a387d8fad035c97ac1b287af406bd01ea1bd851631d39a79ee3cf699f",
	                "LowerDir": "/var/lib/docker/overlay2/abfe6f4b627d53602a4852aa11b97ff39ca3345dd9cdd11aaaa601dd42361499-init/diff:/var/lib/docker/overlay2/b09480e350abbb2f4f48b19448dc8e9ddd0de679fdb8cd59ebc5b758a29b344e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/abfe6f4b627d53602a4852aa11b97ff39ca3345dd9cdd11aaaa601dd42361499/merged",
	                "UpperDir": "/var/lib/docker/overlay2/abfe6f4b627d53602a4852aa11b97ff39ca3345dd9cdd11aaaa601dd42361499/diff",
	                "WorkDir": "/var/lib/docker/overlay2/abfe6f4b627d53602a4852aa11b97ff39ca3345dd9cdd11aaaa601dd42361499/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-638439",
	                "Source": "/var/lib/docker/volumes/no-preload-638439/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-638439",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-638439",
	                "name.minikube.sigs.k8s.io": "no-preload-638439",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6468eaa6e58815ec1a8ce4eef75bd1d1183671d7d0f0969ca0d0d7197bcd337c",
	            "SandboxKey": "/var/run/docker/netns/6468eaa6e588",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-638439": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "704e557aea6b4b1b4f015ac501c682b6edea96a04c5ccb3e1b740fcfc4233bcd",
	                    "EndpointID": "725c5e37d39d816ed8bc0698b36833d09a0bbafb80fc6b01e76122045fed421c",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "16:8a:6e:8d:e0:e9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-638439",
	                        "4ff4bb9a387d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
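Note that "Ulimits": [] in the HostConfig above means no explicit nofile limit was set when the node container was created, so it inherits the Docker daemon's defaults. As an illustration only (not the minikube start path), an nofile limit can be set explicitly on a plain docker run and verified the same way:

    docker run --rm --ulimit nofile=1048576:1048576 busybox /bin/sh -c "ulimit -n"   # prints 1048576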
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-638439 -n no-preload-638439
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-638439 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-638439 logs -n 25: (1.23725036s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ ssh     │ -p cilium-904997 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo containerd config dump                                                                                                                                                                                                        │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo crio config                                                                                                                                                                                                                   │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ delete  │ -p cilium-904997                                                                                                                                                                                                                                    │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │ 19 Nov 25 22:18 UTC │
	│ start   │ -p force-systemd-flag-635885 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-635885 │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │ 19 Nov 25 22:19 UTC │
	│ ssh     │ -p NoKubernetes-836292 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │                     │
	│ ssh     │ force-systemd-flag-635885 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-635885 │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ delete  │ -p force-systemd-flag-635885                                                                                                                                                                                                                        │ force-systemd-flag-635885 │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ stop    │ -p NoKubernetes-836292                                                                                                                                                                                                                              │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ start   │ -p NoKubernetes-836292 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ start   │ -p cert-options-071115 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-071115       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ ssh     │ -p NoKubernetes-836292 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │                     │
	│ delete  │ -p NoKubernetes-836292                                                                                                                                                                                                                              │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ start   │ -p old-k8s-version-975700 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-975700    │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:20 UTC │
	│ ssh     │ cert-options-071115 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-071115       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ ssh     │ -p cert-options-071115 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-071115       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ delete  │ -p cert-options-071115                                                                                                                                                                                                                              │ cert-options-071115       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ start   │ -p no-preload-638439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-638439         │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:20 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-975700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-975700    │ jenkins │ v1.37.0 │ 19 Nov 25 22:20 UTC │ 19 Nov 25 22:20 UTC │
	│ stop    │ -p old-k8s-version-975700 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-975700    │ jenkins │ v1.37.0 │ 19 Nov 25 22:20 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:19:48
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:19:48.990275  248121 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:19:48.990406  248121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:19:48.990419  248121 out.go:374] Setting ErrFile to fd 2...
	I1119 22:19:48.990423  248121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:19:48.990627  248121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
	I1119 22:19:48.991193  248121 out.go:368] Setting JSON to false
	I1119 22:19:48.992321  248121 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3729,"bootTime":1763587060,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:19:48.992426  248121 start.go:143] virtualization: kvm guest
	I1119 22:19:48.994475  248121 out.go:179] * [no-preload-638439] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:19:48.995854  248121 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:19:48.995867  248121 notify.go:221] Checking for updates...
	I1119 22:19:48.998724  248121 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:19:49.000141  248121 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:19:49.004556  248121 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9296/.minikube
	I1119 22:19:49.005782  248121 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:19:49.006906  248121 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:19:49.008438  248121 config.go:182] Loaded profile config "cert-expiration-207460": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:19:49.008559  248121 config.go:182] Loaded profile config "kubernetes-upgrade-133839": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:19:49.008672  248121 config.go:182] Loaded profile config "old-k8s-version-975700": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 22:19:49.008773  248121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:19:49.032838  248121 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:19:49.032973  248121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:19:49.090138  248121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:19:49.078907682 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:19:49.090254  248121 docker.go:319] overlay module found
	I1119 22:19:49.091878  248121 out.go:179] * Using the docker driver based on user configuration
	I1119 22:19:49.093038  248121 start.go:309] selected driver: docker
	I1119 22:19:49.093053  248121 start.go:930] validating driver "docker" against <nil>
	I1119 22:19:49.093064  248121 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:19:49.093625  248121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:19:49.156775  248121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:19:49.145211302 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:19:49.157058  248121 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:19:49.157439  248121 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:19:49.159270  248121 out.go:179] * Using Docker driver with root privileges
	I1119 22:19:49.160689  248121 cni.go:84] Creating CNI manager for ""
	I1119 22:19:49.160762  248121 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:19:49.160776  248121 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:19:49.160859  248121 start.go:353] cluster config:
	{Name:no-preload-638439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-638439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:19:49.162538  248121 out.go:179] * Starting "no-preload-638439" primary control-plane node in "no-preload-638439" cluster
	I1119 22:19:49.165506  248121 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 22:19:49.166733  248121 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:19:49.168220  248121 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:19:49.168286  248121 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:19:49.168353  248121 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/config.json ...
	I1119 22:19:49.168395  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/config.json: {Name:mk80aa81bbdb1209c6edea855d376fb83f4d3158 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:19:49.168457  248121 cache.go:107] acquiring lock: {Name:mk3047e241e868539f7fa71732db2494bd5accac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.168492  248121 cache.go:107] acquiring lock: {Name:mkfa0cff605af699ff39a13e0c5b50d01194658e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.168527  248121 cache.go:107] acquiring lock: {Name:mk97f6c43b208e1a8e4ae123374c490c517b3f77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.168548  248121 cache.go:115] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1119 22:19:49.168561  248121 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 131.881µs
	I1119 22:19:49.168577  248121 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1119 22:19:49.168586  248121 cache.go:107] acquiring lock: {Name:mk95307f4a2dfa9e7a1dbc92b6b01bf8659d9b13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.168623  248121 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:49.168652  248121 cache.go:107] acquiring lock: {Name:mk07d9df97c614ffb0affecc21609079d8bc04b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.168677  248121 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:49.168687  248121 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:49.168749  248121 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1119 22:19:49.169004  248121 cache.go:107] acquiring lock: {Name:mk5d2dd3f2b18e53fa90921f4c0c75406a912168 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.169610  248121 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:49.169116  248121 cache.go:107] acquiring lock: {Name:mkabd0eddb0cd66931eabcbabac2ddbe82464607 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.170495  248121 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:49.169136  248121 cache.go:107] acquiring lock: {Name:mkc18e74e5d25fdb795ed308cf7ce3da142a9be0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.170703  248121 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:49.171552  248121 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:49.171558  248121 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1119 22:19:49.171569  248121 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:49.171576  248121 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:49.172459  248121 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:49.172478  248121 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:49.172507  248121 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:49.200114  248121 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:19:49.200187  248121 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:19:49.200226  248121 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:19:49.200265  248121 start.go:360] acquireMachinesLock for no-preload-638439: {Name:mk6b4dc7fd24c69d9288f594d61933b094ed5442 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:19:49.200436  248121 start.go:364] duration metric: took 142.192µs to acquireMachinesLock for "no-preload-638439"
	I1119 22:19:49.200608  248121 start.go:93] Provisioning new machine with config: &{Name:no-preload-638439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-638439 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:19:49.200727  248121 start.go:125] createHost starting for "" (driver="docker")
	I1119 22:19:46.119049  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:19:46.119476  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:19:46.119522  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:19:46.119566  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:19:46.151572  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:46.151601  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:46.151607  216336 cri.go:89] found id: ""
	I1119 22:19:46.151617  216336 logs.go:282] 2 containers: [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:19:46.151687  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:46.155958  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:46.160473  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:19:46.160530  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:19:46.191589  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:46.191612  216336 cri.go:89] found id: ""
	I1119 22:19:46.191619  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:19:46.191670  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:46.196383  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:19:46.196437  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:19:46.225509  216336 cri.go:89] found id: ""
	I1119 22:19:46.225529  216336 logs.go:282] 0 containers: []
	W1119 22:19:46.225540  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:19:46.225546  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:19:46.225599  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:19:46.254866  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:46.254913  216336 cri.go:89] found id: ""
	I1119 22:19:46.254924  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:19:46.254979  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:46.259701  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:19:46.259765  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:19:46.292564  216336 cri.go:89] found id: ""
	I1119 22:19:46.292591  216336 logs.go:282] 0 containers: []
	W1119 22:19:46.292601  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:19:46.292608  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:19:46.292667  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:19:46.329564  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:46.329596  216336 cri.go:89] found id: ""
	I1119 22:19:46.329606  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:19:46.329667  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:46.335222  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:19:46.335276  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:19:46.367004  216336 cri.go:89] found id: ""
	I1119 22:19:46.367028  216336 logs.go:282] 0 containers: []
	W1119 22:19:46.367039  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:19:46.367047  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:19:46.367105  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:19:46.399927  216336 cri.go:89] found id: ""
	I1119 22:19:46.399974  216336 logs.go:282] 0 containers: []
	W1119 22:19:46.399984  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:19:46.400002  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:19:46.400017  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:19:46.463044  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:19:46.463068  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:19:46.463083  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:46.497691  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:19:46.497718  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:46.535424  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:19:46.535455  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:46.575124  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:19:46.575154  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:19:46.607742  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:19:46.607769  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:19:46.710299  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:19:46.710332  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:19:46.724051  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:19:46.724080  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:46.762457  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:19:46.762489  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:46.803568  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:19:46.803601  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:19:49.354660  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:19:49.355043  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:19:49.355109  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:19:49.355169  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:19:49.395681  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:49.395705  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:49.395709  216336 cri.go:89] found id: ""
	I1119 22:19:49.395716  216336 logs.go:282] 2 containers: [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:19:49.395781  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:49.403424  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:49.410799  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:19:49.410949  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:19:49.452918  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:49.452941  216336 cri.go:89] found id: ""
	I1119 22:19:49.452952  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:19:49.453011  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:49.458252  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:19:49.458323  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:19:49.497813  216336 cri.go:89] found id: ""
	I1119 22:19:49.497837  216336 logs.go:282] 0 containers: []
	W1119 22:19:49.497855  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:19:49.497863  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:19:49.497929  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:19:49.533334  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:49.533350  216336 cri.go:89] found id: ""
	I1119 22:19:49.533357  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:19:49.533399  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:49.537784  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:19:49.537858  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:19:49.568018  216336 cri.go:89] found id: ""
	I1119 22:19:49.568044  216336 logs.go:282] 0 containers: []
	W1119 22:19:49.568056  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:19:49.568063  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:19:49.568119  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:19:49.609525  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:49.609556  216336 cri.go:89] found id: ""
	I1119 22:19:49.609566  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:19:49.609626  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:49.616140  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:19:49.616211  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:19:49.655231  216336 cri.go:89] found id: ""
	I1119 22:19:49.655262  216336 logs.go:282] 0 containers: []
	W1119 22:19:49.655272  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:19:49.655279  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:19:49.655333  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:19:49.689095  216336 cri.go:89] found id: ""
	I1119 22:19:49.689153  216336 logs.go:282] 0 containers: []
	W1119 22:19:49.689165  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:19:49.689184  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:19:49.689221  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:19:49.810665  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:19:49.810701  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:19:49.901949  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:19:49.901999  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:19:49.902017  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:49.959095  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:19:49.959128  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:50.003553  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:19:50.003592  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:19:50.058586  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:19:50.058623  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:19:50.074307  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:19:50.074340  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:50.111045  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:19:50.111081  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:50.150599  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:19:50.150632  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:50.185189  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:19:50.185216  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
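The log-gathering cycle above reduces to two crictl calls per component: list matching container IDs, then tail each container's log. A minimal sketch of that sequence (container name and tail length taken from the log; the wrapper loop is an assumption added for illustration):

    # find container IDs for one component, then dump the last 400 log lines of each
    for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
      sudo crictl logs --tail 400 "$id"
    done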
	I1119 22:19:48.204748  244005 out.go:252]   - Booting up control plane ...
	I1119 22:19:48.204897  244005 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:19:48.205005  244005 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:19:48.206240  244005 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:19:48.231808  244005 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:19:48.232853  244005 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:19:48.232929  244005 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:19:48.338373  244005 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
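The "wait-control-plane" phase above watches for the static Pod manifests written by the preceding steps to come up as running Pods. A quick way to confirm the manifests exist on the node (directory as stated by kubeadm above; the ls invocation and typical file names are assumptions):

    # kubeadm writes one manifest per control-plane component into this directory
    ls /etc/kubernetes/manifests
    # typically: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml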
	I1119 22:19:49.203330  248121 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:19:49.203668  248121 start.go:159] libmachine.API.Create for "no-preload-638439" (driver="docker")
	I1119 22:19:49.203755  248121 client.go:173] LocalClient.Create starting
	I1119 22:19:49.203905  248121 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem
	I1119 22:19:49.203977  248121 main.go:143] libmachine: Decoding PEM data...
	I1119 22:19:49.204016  248121 main.go:143] libmachine: Parsing certificate...
	I1119 22:19:49.204103  248121 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem
	I1119 22:19:49.204159  248121 main.go:143] libmachine: Decoding PEM data...
	I1119 22:19:49.204190  248121 main.go:143] libmachine: Parsing certificate...
	I1119 22:19:49.204684  248121 cli_runner.go:164] Run: docker network inspect no-preload-638439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:19:49.233073  248121 cli_runner.go:211] docker network inspect no-preload-638439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:19:49.233150  248121 network_create.go:284] running [docker network inspect no-preload-638439] to gather additional debugging logs...
	I1119 22:19:49.233181  248121 cli_runner.go:164] Run: docker network inspect no-preload-638439
	W1119 22:19:49.260692  248121 cli_runner.go:211] docker network inspect no-preload-638439 returned with exit code 1
	I1119 22:19:49.260724  248121 network_create.go:287] error running [docker network inspect no-preload-638439]: docker network inspect no-preload-638439: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-638439 not found
	I1119 22:19:49.260740  248121 network_create.go:289] output of [docker network inspect no-preload-638439]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-638439 not found
	
	** /stderr **
	I1119 22:19:49.260835  248121 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:19:49.281699  248121 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-02d9279961e7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:f0:7b:99:dd:08} reservation:<nil>}
	I1119 22:19:49.282496  248121 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-474134d72c89 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ee:14:41:ce:21:e4} reservation:<nil>}
	I1119 22:19:49.283428  248121 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-527206f47d61 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:ef:fd:4c:e4:1b} reservation:<nil>}
	I1119 22:19:49.284394  248121 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ac16fd64007f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:16:dc:21:09:78:e5} reservation:<nil>}
	I1119 22:19:49.285073  248121 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-11547e9c7cf3 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:a2:49:21:10:91:74} reservation:<nil>}
	I1119 22:19:49.286118  248121 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-e025fa4e3e96 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:c2:19:71:ce:4a:3c} reservation:<nil>}
	I1119 22:19:49.287275  248121 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e92190}
	I1119 22:19:49.287353  248121 network_create.go:124] attempt to create docker network no-preload-638439 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1119 22:19:49.287448  248121 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-638439 no-preload-638439
	I1119 22:19:49.349621  248121 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1119 22:19:49.349748  248121 network_create.go:108] docker network no-preload-638439 192.168.103.0/24 created
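The subnet scan above walks the taken 192.168.x.0/24 ranges until it finds a free one, then creates a dedicated bridge network for the node. A condensed sketch of the creation plus a follow-up check (subnet, gateway, MTU and label taken from the log; the simplified flag set and the inspect template are assumptions):

    docker network create --driver=bridge \
      --subnet=192.168.103.0/24 --gateway=192.168.103.1 \
      -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true no-preload-638439
    # verify the subnet actually attached to the new bridge network
    docker network inspect no-preload-638439 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'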
	I1119 22:19:49.349780  248121 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-638439" container
	I1119 22:19:49.349859  248121 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:19:49.350149  248121 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1119 22:19:49.361305  248121 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1119 22:19:49.363150  248121 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1119 22:19:49.375619  248121 cli_runner.go:164] Run: docker volume create no-preload-638439 --label name.minikube.sigs.k8s.io=no-preload-638439 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:19:49.389385  248121 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1119 22:19:49.396358  248121 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1119 22:19:49.402036  248121 oci.go:103] Successfully created a docker volume no-preload-638439
	I1119 22:19:49.402119  248121 cli_runner.go:164] Run: docker run --rm --name no-preload-638439-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-638439 --entrypoint /usr/bin/test -v no-preload-638439:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:19:49.404338  248121 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1119 22:19:49.471774  248121 cache.go:157] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1119 22:19:49.471808  248121 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 303.216742ms
	I1119 22:19:49.471832  248121 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1119 22:19:49.854076  248121 cache.go:157] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1119 22:19:49.854102  248121 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 685.635122ms
	I1119 22:19:49.854114  248121 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1119 22:19:49.969965  248121 oci.go:107] Successfully prepared a docker volume no-preload-638439
	I1119 22:19:49.970027  248121 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	W1119 22:19:49.970211  248121 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 22:19:49.970251  248121 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 22:19:49.970298  248121 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:19:50.046746  248121 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-638439 --name no-preload-638439 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-638439 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-638439 --network no-preload-638439 --ip 192.168.103.2 --volume no-preload-638439:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:19:50.374513  248121 cli_runner.go:164] Run: docker container inspect no-preload-638439 --format={{.State.Running}}
	I1119 22:19:50.397354  248121 cli_runner.go:164] Run: docker container inspect no-preload-638439 --format={{.State.Status}}
	I1119 22:19:50.420153  248121 cli_runner.go:164] Run: docker exec no-preload-638439 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:19:50.480826  248121 oci.go:144] the created container "no-preload-638439" has a running status.
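The docker run above publishes the node container's SSH and apiserver ports to ephemeral host ports on 127.0.0.1; the mapped SSH port (33063 in this run) is what the SSH client connects to a few lines later. It can be read back either with docker port or with the same inspect template the log uses:

    # ask Docker which host port the node container's 22/tcp was bound to
    docker port no-preload-638439 22
    # or, using the template that appears in the log:
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-638439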
	I1119 22:19:50.480855  248121 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa...
	I1119 22:19:50.741014  248121 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:19:50.777653  248121 cli_runner.go:164] Run: docker container inspect no-preload-638439 --format={{.State.Status}}
	I1119 22:19:50.805773  248121 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:19:50.805802  248121 kic_runner.go:114] Args: [docker exec --privileged no-preload-638439 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:19:50.864742  248121 cli_runner.go:164] Run: docker container inspect no-preload-638439 --format={{.State.Status}}
	I1119 22:19:50.878812  248121 cache.go:157] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1119 22:19:50.878846  248121 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.709887948s
	I1119 22:19:50.878866  248121 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1119 22:19:50.883024  248121 cache.go:157] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1119 22:19:50.883052  248121 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.714530905s
	I1119 22:19:50.883067  248121 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1119 22:19:50.889090  248121 cache.go:157] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1119 22:19:50.889119  248121 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.72053761s
	I1119 22:19:50.889134  248121 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1119 22:19:50.890545  248121 machine.go:94] provisionDockerMachine start ...
	I1119 22:19:50.890654  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:50.917029  248121 main.go:143] libmachine: Using SSH client type: native
	I1119 22:19:50.917372  248121 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1119 22:19:50.917394  248121 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:19:50.918143  248121 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41082->127.0.0.1:33063: read: connection reset by peer
	I1119 22:19:50.954753  248121 cache.go:157] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1119 22:19:50.954786  248121 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.785730546s
	I1119 22:19:50.954801  248121 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1119 22:19:51.295575  248121 cache.go:157] /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1119 22:19:51.295602  248121 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.126530323s
	I1119 22:19:51.295614  248121 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1119 22:19:51.295629  248121 cache.go:87] Successfully saved all images to host disk.
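At this point every required image has been saved into the profile's on-disk cache, one file per image named image_tag (paths as shown in the cache.go lines above). A quick way to list what was written (the find invocation is an assumption, not part of the run):

    # list the cached image files for this architecture
    find /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64 -type f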
	I1119 22:19:53.340728  244005 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.002509 seconds
	I1119 22:19:53.340920  244005 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:19:53.353852  244005 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:19:53.877436  244005 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:19:53.877630  244005 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-975700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:19:54.388156  244005 kubeadm.go:319] [bootstrap-token] Using token: cb0uuv.ole7whobrm4tnmeu
	I1119 22:19:54.389814  244005 out.go:252]   - Configuring RBAC rules ...
	I1119 22:19:54.389996  244005 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:19:54.396226  244005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:19:54.404040  244005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:19:54.407336  244005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:19:54.410095  244005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:19:54.412761  244005 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:19:54.424912  244005 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:19:54.627091  244005 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:19:54.803149  244005 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:19:54.807538  244005 kubeadm.go:319] 
	I1119 22:19:54.807624  244005 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:19:54.807631  244005 kubeadm.go:319] 
	I1119 22:19:54.807719  244005 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:19:54.807724  244005 kubeadm.go:319] 
	I1119 22:19:54.807753  244005 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:19:54.807821  244005 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:19:54.807898  244005 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:19:54.807905  244005 kubeadm.go:319] 
	I1119 22:19:54.807968  244005 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:19:54.807973  244005 kubeadm.go:319] 
	I1119 22:19:54.808037  244005 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:19:54.808042  244005 kubeadm.go:319] 
	I1119 22:19:54.808105  244005 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:19:54.808197  244005 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:19:54.808278  244005 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:19:54.808283  244005 kubeadm.go:319] 
	I1119 22:19:54.808378  244005 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:19:54.808482  244005 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:19:54.808488  244005 kubeadm.go:319] 
	I1119 22:19:54.808581  244005 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token cb0uuv.ole7whobrm4tnmeu \
	I1119 22:19:54.808697  244005 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6123875ff628fb9eedbd72f2253477865aa197083b84a1d60cb6c00de308bc63 \
	I1119 22:19:54.808745  244005 kubeadm.go:319] 	--control-plane 
	I1119 22:19:54.808753  244005 kubeadm.go:319] 
	I1119 22:19:54.808860  244005 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:19:54.808867  244005 kubeadm.go:319] 
	I1119 22:19:54.808978  244005 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token cb0uuv.ole7whobrm4tnmeu \
	I1119 22:19:54.809119  244005 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6123875ff628fb9eedbd72f2253477865aa197083b84a1d60cb6c00de308bc63 
	I1119 22:19:54.812703  244005 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 22:19:54.812825  244005 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:19:54.812852  244005 cni.go:84] Creating CNI manager for ""
	I1119 22:19:54.812906  244005 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:19:54.814910  244005 out.go:179] * Configuring CNI (Container Networking Interface) ...
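Because this profile combines the docker driver with the containerd runtime, kindnet is selected as the CNI; the manifest is applied a few lines further down (the 22:19:54.833857 entry below). That step amounts to the following command, copied from that log line:

    sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply \
      --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml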
	I1119 22:19:52.733247  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:19:52.733770  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:19:52.733821  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:19:52.733900  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:19:52.766790  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:52.766819  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:52.766824  216336 cri.go:89] found id: ""
	I1119 22:19:52.766834  216336 logs.go:282] 2 containers: [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:19:52.766917  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:52.771725  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:52.776283  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:19:52.776357  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:19:52.808152  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:52.808179  216336 cri.go:89] found id: ""
	I1119 22:19:52.808190  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:19:52.808260  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:52.812851  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:19:52.812954  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:19:52.844459  216336 cri.go:89] found id: ""
	I1119 22:19:52.844483  216336 logs.go:282] 0 containers: []
	W1119 22:19:52.844492  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:19:52.844499  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:19:52.844560  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:19:52.875911  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:52.875939  216336 cri.go:89] found id: ""
	I1119 22:19:52.875948  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:19:52.876008  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:52.880449  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:19:52.880526  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:19:52.913101  216336 cri.go:89] found id: ""
	I1119 22:19:52.913139  216336 logs.go:282] 0 containers: []
	W1119 22:19:52.913150  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:19:52.913158  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:19:52.913240  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:19:52.945143  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:52.945172  216336 cri.go:89] found id: ""
	I1119 22:19:52.945182  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:19:52.945240  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:52.949921  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:19:52.950006  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:19:52.984180  216336 cri.go:89] found id: ""
	I1119 22:19:52.984214  216336 logs.go:282] 0 containers: []
	W1119 22:19:52.984225  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:19:52.984233  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:19:52.984296  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:19:53.016636  216336 cri.go:89] found id: ""
	I1119 22:19:53.016661  216336 logs.go:282] 0 containers: []
	W1119 22:19:53.016671  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:19:53.016691  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:19:53.016707  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:53.053700  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:19:53.053730  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:53.088889  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:19:53.088922  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:19:53.104350  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:19:53.104378  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:19:53.165418  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:19:53.165442  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:19:53.165460  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:53.197214  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:19:53.197252  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:53.228109  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:19:53.228145  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:53.261694  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:19:53.261727  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:19:53.302850  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:19:53.302891  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:19:53.333442  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:19:53.333466  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:19:54.046074  248121 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-638439
	
	I1119 22:19:54.046106  248121 ubuntu.go:182] provisioning hostname "no-preload-638439"
	I1119 22:19:54.046172  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:54.065777  248121 main.go:143] libmachine: Using SSH client type: native
	I1119 22:19:54.066044  248121 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1119 22:19:54.066060  248121 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-638439 && echo "no-preload-638439" | sudo tee /etc/hostname
	I1119 22:19:54.204707  248121 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-638439
	
	I1119 22:19:54.204779  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:54.223401  248121 main.go:143] libmachine: Using SSH client type: native
	I1119 22:19:54.223669  248121 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1119 22:19:54.223696  248121 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-638439' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-638439/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-638439' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:19:54.352178  248121 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:19:54.352206  248121 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9296/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9296/.minikube}
	I1119 22:19:54.352222  248121 ubuntu.go:190] setting up certificates
	I1119 22:19:54.352230  248121 provision.go:84] configureAuth start
	I1119 22:19:54.352301  248121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-638439
	I1119 22:19:54.371286  248121 provision.go:143] copyHostCerts
	I1119 22:19:54.371354  248121 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem, removing ...
	I1119 22:19:54.371370  248121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem
	I1119 22:19:54.371451  248121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem (1078 bytes)
	I1119 22:19:54.371570  248121 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem, removing ...
	I1119 22:19:54.371582  248121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem
	I1119 22:19:54.371623  248121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem (1123 bytes)
	I1119 22:19:54.371701  248121 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem, removing ...
	I1119 22:19:54.371710  248121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem
	I1119 22:19:54.371748  248121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem (1679 bytes)
	I1119 22:19:54.371818  248121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem org=jenkins.no-preload-638439 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-638439]
	I1119 22:19:54.471021  248121 provision.go:177] copyRemoteCerts
	I1119 22:19:54.471092  248121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:19:54.471126  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:54.492235  248121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa Username:docker}
	I1119 22:19:54.594331  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:19:54.619378  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 22:19:54.640347  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 22:19:54.663269  248121 provision.go:87] duration metric: took 311.007703ms to configureAuth
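The server certificate generated during configureAuth embeds the loopback address, the node's static IP and the minikube hostnames as SANs (the san=[...] list above). If the SANs ever need checking, an openssl inspection of the generated file would look like this (cert path taken from the log; the openssl call is an assumption):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'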
	I1119 22:19:54.663306  248121 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:19:54.663514  248121 config.go:182] Loaded profile config "no-preload-638439": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:19:54.663528  248121 machine.go:97] duration metric: took 3.772952055s to provisionDockerMachine
	I1119 22:19:54.663538  248121 client.go:176] duration metric: took 5.459757711s to LocalClient.Create
	I1119 22:19:54.663558  248121 start.go:167] duration metric: took 5.459889493s to libmachine.API.Create "no-preload-638439"
	I1119 22:19:54.663572  248121 start.go:293] postStartSetup for "no-preload-638439" (driver="docker")
	I1119 22:19:54.663584  248121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:19:54.663643  248121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:19:54.663702  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:54.693309  248121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa Username:docker}
	I1119 22:19:54.794533  248121 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:19:54.799614  248121 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:19:54.799652  248121 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:19:54.799667  248121 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9296/.minikube/addons for local assets ...
	I1119 22:19:54.799750  248121 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9296/.minikube/files for local assets ...
	I1119 22:19:54.799853  248121 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem -> 128212.pem in /etc/ssl/certs
	I1119 22:19:54.800010  248121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:19:54.811703  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem --> /etc/ssl/certs/128212.pem (1708 bytes)
	I1119 22:19:54.833815  248121 start.go:296] duration metric: took 170.228401ms for postStartSetup
	I1119 22:19:54.834269  248121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-638439
	I1119 22:19:54.855648  248121 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/config.json ...
	I1119 22:19:54.855997  248121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:19:54.856065  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:54.875839  248121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa Username:docker}
	I1119 22:19:54.971298  248121 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:19:54.976558  248121 start.go:128] duration metric: took 5.775804384s to createHost
	I1119 22:19:54.976584  248121 start.go:83] releasing machines lock for "no-preload-638439", held for 5.775996243s
	I1119 22:19:54.976652  248121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-638439
	I1119 22:19:54.996323  248121 ssh_runner.go:195] Run: cat /version.json
	I1119 22:19:54.996379  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:54.996397  248121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:19:54.996468  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:19:55.015498  248121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa Username:docker}
	I1119 22:19:55.015796  248121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa Username:docker}
	I1119 22:19:55.110222  248121 ssh_runner.go:195] Run: systemctl --version
	I1119 22:19:55.167157  248121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:19:55.172373  248121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:19:55.172445  248121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:19:55.200823  248121 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 22:19:55.200849  248121 start.go:496] detecting cgroup driver to use...
	I1119 22:19:55.200917  248121 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 22:19:55.200971  248121 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 22:19:55.216429  248121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 22:19:55.230198  248121 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:19:55.230259  248121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:19:55.247760  248121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:19:55.266193  248121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:19:55.355176  248121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:19:55.456550  248121 docker.go:234] disabling docker service ...
	I1119 22:19:55.456609  248121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:19:55.479653  248121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:19:55.493533  248121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:19:55.592560  248121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:19:55.702080  248121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:19:55.719351  248121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:19:55.735307  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 22:19:55.748222  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 22:19:55.759552  248121 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1119 22:19:55.759604  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1119 22:19:55.771633  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:19:55.782179  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 22:19:55.791940  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:19:55.801486  248121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:19:55.810671  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 22:19:55.820637  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 22:19:55.830057  248121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 22:19:55.839605  248121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:19:55.847930  248121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:19:55.856300  248121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:19:55.943868  248121 ssh_runner.go:195] Run: sudo systemctl restart containerd
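The sed edits above switch containerd to the systemd cgroup driver, pin the pause (sandbox) image, point the CNI conf dir at /etc/cni/net.d and enable unprivileged ports before the daemon is restarted. A quick post-restart sanity check (config path from the log; the grep is an assumption):

    # confirm the settings the sed commands were meant to produce
    grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    sudo systemctl is-active containerd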
	I1119 22:19:56.031481  248121 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 22:19:56.031555  248121 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 22:19:56.036560  248121 start.go:564] Will wait 60s for crictl version
	I1119 22:19:56.036619  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.040772  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:19:56.068661  248121 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 22:19:56.068728  248121 ssh_runner.go:195] Run: containerd --version
	I1119 22:19:56.092486  248121 ssh_runner.go:195] Run: containerd --version
	I1119 22:19:56.118002  248121 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
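The version handshake above is what "Will wait 60s for crictl version" resolves to; the same information can be pulled manually with the two commands the log runs:

    sudo /usr/local/bin/crictl version   # RuntimeName/RuntimeVersion as printed above
    containerd --version                 # containerd v2.1.5 in this run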
	I1119 22:19:54.816277  244005 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:19:54.820558  244005 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1119 22:19:54.820581  244005 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:19:54.833857  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:19:55.525202  244005 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:19:55.525370  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:55.525485  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-975700 minikube.k8s.io/updated_at=2025_11_19T22_19_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=old-k8s-version-975700 minikube.k8s.io/primary=true
	I1119 22:19:55.543472  244005 ops.go:34] apiserver oom_adj: -16
	I1119 22:19:55.632765  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:56.133706  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:56.632860  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:57.133046  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:56.119594  248121 cli_runner.go:164] Run: docker network inspect no-preload-638439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:19:56.139074  248121 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1119 22:19:56.143662  248121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:19:56.156640  248121 kubeadm.go:884] updating cluster {Name:no-preload-638439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-638439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:19:56.156774  248121 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:19:56.156835  248121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:19:56.185228  248121 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1119 22:19:56.185258  248121 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1119 22:19:56.185326  248121 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:56.185359  248121 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:56.185391  248121 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1119 22:19:56.185403  248121 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:56.185415  248121 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:56.185453  248121 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:56.185334  248121 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:19:56.185400  248121 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:56.186856  248121 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:56.186874  248121 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:19:56.186979  248121 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:56.186979  248121 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:56.187070  248121 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1119 22:19:56.187094  248121 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:56.187129  248121 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:56.187150  248121 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:56.332716  248121 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1119 22:19:56.332783  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:56.332809  248121 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1119 22:19:56.332864  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:56.335699  248121 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1119 22:19:56.335755  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:56.343400  248121 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115"
	I1119 22:19:56.343484  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:56.354423  248121 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1119 22:19:56.354489  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:56.357606  248121 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1119 22:19:56.357630  248121 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1119 22:19:56.357659  248121 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:56.357662  248121 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:56.357709  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.357709  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.359708  248121 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1119 22:19:56.359750  248121 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:56.359792  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.365141  248121 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1119 22:19:56.365211  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1119 22:19:56.370262  248121 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1119 22:19:56.370317  248121 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:56.370368  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.380909  248121 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1119 22:19:56.380976  248121 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:56.381006  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:56.381021  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.381050  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:56.381079  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:56.387736  248121 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1119 22:19:56.387826  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:56.388049  248121 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1119 22:19:56.388093  248121 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1119 22:19:56.388134  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:56.388139  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.388097  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:56.419491  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:56.419632  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:56.422653  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:56.424802  248121 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1119 22:19:56.424851  248121 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:56.424918  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.426559  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:56.426657  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:56.426745  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1119 22:19:56.457323  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:19:56.459754  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:56.459823  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:19:56.459928  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:19:56.464385  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1119 22:19:56.464524  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1119 22:19:56.464526  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:19:56.499739  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1119 22:19:56.499837  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1119 22:19:56.504038  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1119 22:19:56.504120  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 22:19:56.504047  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:56.504087  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1119 22:19:56.504256  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1119 22:19:56.507722  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1119 22:19:56.507817  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1119 22:19:56.507959  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1119 22:19:56.508035  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1119 22:19:56.508064  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1119 22:19:56.508205  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1119 22:19:56.508348  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 22:19:56.515236  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1119 22:19:56.515270  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1119 22:19:56.555985  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1119 22:19:56.556025  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1119 22:19:56.556078  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:19:56.556101  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1119 22:19:56.556122  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1119 22:19:56.571156  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1119 22:19:56.571205  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1119 22:19:56.571220  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1119 22:19:56.571322  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1119 22:19:56.646952  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1119 22:19:56.646960  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1119 22:19:56.646995  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1119 22:19:56.647066  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 22:19:56.713984  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1119 22:19:56.714047  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1119 22:19:56.738791  248121 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1119 22:19:56.738923  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1119 22:19:56.888282  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1119 22:19:56.888324  248121 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 22:19:56.888394  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 22:19:57.461211  248121 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1119 22:19:57.461286  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:19:57.982686  248121 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.094253154s)
	I1119 22:19:57.982716  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1119 22:19:57.982712  248121 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1119 22:19:57.982738  248121 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1119 22:19:57.982764  248121 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:19:57.982789  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1119 22:19:57.982801  248121 ssh_runner.go:195] Run: which crictl
	I1119 22:19:58.943228  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1119 22:19:58.943276  248121 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1119 22:19:58.943321  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1119 22:19:58.943326  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:19:55.919868  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:19:55.920354  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:19:55.920400  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:19:55.920445  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:19:55.949031  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:55.949059  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:55.949065  216336 cri.go:89] found id: ""
	I1119 22:19:55.949074  216336 logs.go:282] 2 containers: [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:19:55.949133  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:55.953108  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:55.957378  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:19:55.957442  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:19:55.987066  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:55.987094  216336 cri.go:89] found id: ""
	I1119 22:19:55.987104  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:19:55.987165  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:55.991215  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:19:55.991296  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:19:56.020982  216336 cri.go:89] found id: ""
	I1119 22:19:56.021011  216336 logs.go:282] 0 containers: []
	W1119 22:19:56.021022  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:19:56.021031  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:19:56.021093  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:19:56.051114  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:56.051138  216336 cri.go:89] found id: ""
	I1119 22:19:56.051147  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:19:56.051210  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.056071  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:19:56.056142  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:19:56.085375  216336 cri.go:89] found id: ""
	I1119 22:19:56.085398  216336 logs.go:282] 0 containers: []
	W1119 22:19:56.085405  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:19:56.085414  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:19:56.085457  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:19:56.114914  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:56.114941  216336 cri.go:89] found id: ""
	I1119 22:19:56.114951  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:19:56.115011  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:56.119718  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:19:56.119785  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:19:56.148992  216336 cri.go:89] found id: ""
	I1119 22:19:56.149019  216336 logs.go:282] 0 containers: []
	W1119 22:19:56.149029  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:19:56.149037  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:19:56.149096  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:19:56.179135  216336 cri.go:89] found id: ""
	I1119 22:19:56.179163  216336 logs.go:282] 0 containers: []
	W1119 22:19:56.179173  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:19:56.179190  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:19:56.179204  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:56.216379  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:19:56.216409  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:56.252073  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:19:56.252103  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:19:56.283542  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:19:56.283567  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:19:56.381327  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:19:56.381359  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:19:56.399981  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:19:56.400019  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:19:56.493857  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:19:56.493894  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:19:56.493913  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:56.537441  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:19:56.537473  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:19:56.590041  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:19:56.590076  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:56.633876  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:19:56.633925  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:59.179328  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:19:59.179856  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:19:59.179947  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:19:59.180012  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:19:59.213304  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:59.213329  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:59.213336  216336 cri.go:89] found id: ""
	I1119 22:19:59.213346  216336 logs.go:282] 2 containers: [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:19:59.213410  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:59.218953  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:59.223649  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:19:59.223722  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:19:59.256070  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:59.256133  216336 cri.go:89] found id: ""
	I1119 22:19:59.256144  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:19:59.256211  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:59.261436  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:19:59.261514  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:19:59.294827  216336 cri.go:89] found id: ""
	I1119 22:19:59.294854  216336 logs.go:282] 0 containers: []
	W1119 22:19:59.294864  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:19:59.294871  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:19:59.294944  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:19:59.328052  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:59.328078  216336 cri.go:89] found id: ""
	I1119 22:19:59.328087  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:19:59.328148  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:59.333661  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:19:59.333745  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:19:59.367498  216336 cri.go:89] found id: ""
	I1119 22:19:59.367525  216336 logs.go:282] 0 containers: []
	W1119 22:19:59.367534  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:19:59.367543  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:19:59.367601  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:19:59.401843  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:59.401868  216336 cri.go:89] found id: ""
	I1119 22:19:59.401877  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:19:59.401982  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:19:59.406399  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:19:59.406473  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:19:59.437867  216336 cri.go:89] found id: ""
	I1119 22:19:59.437948  216336 logs.go:282] 0 containers: []
	W1119 22:19:59.437957  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:19:59.437963  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:19:59.438041  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:19:59.465826  216336 cri.go:89] found id: ""
	I1119 22:19:59.465856  216336 logs.go:282] 0 containers: []
	W1119 22:19:59.465866  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:19:59.465905  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:19:59.465953  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:19:59.498633  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:19:59.498670  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:19:59.586643  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:19:59.586677  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:19:59.602123  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:19:59.602148  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:19:59.668657  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:19:59.668675  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:19:59.668702  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:19:59.705026  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:19:59.705060  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:19:59.741520  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:19:59.741550  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:19:59.780920  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:19:59.780952  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:19:59.819532  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:19:59.819572  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:19:59.861394  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:19:59.861428  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:19:57.633270  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:58.133177  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:58.633156  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:59.133958  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:19:59.632816  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:00.133904  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:00.633510  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:01.132810  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:01.632963  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:02.132866  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:00.209856  248121 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.266503638s)
	I1119 22:20:00.209924  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1119 22:20:00.209943  248121 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.266589504s)
	I1119 22:20:00.209953  248121 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 22:20:00.210022  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:20:00.210039  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 22:20:01.315659  248121 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.105588091s)
	I1119 22:20:01.315688  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1119 22:20:01.315709  248121 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 22:20:01.315726  248121 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.105675845s)
	I1119 22:20:01.315757  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 22:20:01.315796  248121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:20:02.564406  248121 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.248612967s)
	I1119 22:20:02.564435  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1119 22:20:02.564452  248121 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.248631025s)
	I1119 22:20:02.564470  248121 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1119 22:20:02.564502  248121 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1119 22:20:02.564519  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1119 22:20:02.564590  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1119 22:20:02.568829  248121 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1119 22:20:02.568862  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1119 22:20:02.417703  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:20:02.418103  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:20:02.418159  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:20:02.418203  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:20:02.450244  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:20:02.450266  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:02.450271  216336 cri.go:89] found id: ""
	I1119 22:20:02.450280  216336 logs.go:282] 2 containers: [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:20:02.450336  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:02.455477  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:02.460188  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:20:02.460263  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:20:02.491317  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:02.491341  216336 cri.go:89] found id: ""
	I1119 22:20:02.491351  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:20:02.491409  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:02.495754  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:20:02.495837  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:20:02.526395  216336 cri.go:89] found id: ""
	I1119 22:20:02.526421  216336 logs.go:282] 0 containers: []
	W1119 22:20:02.526433  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:20:02.526441  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:20:02.526509  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:20:02.556596  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:02.556619  216336 cri.go:89] found id: ""
	I1119 22:20:02.556629  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:20:02.556686  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:02.561029  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:20:02.561102  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:20:02.593442  216336 cri.go:89] found id: ""
	I1119 22:20:02.593468  216336 logs.go:282] 0 containers: []
	W1119 22:20:02.593480  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:20:02.593488  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:20:02.593547  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:20:02.626155  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:02.626181  216336 cri.go:89] found id: ""
	I1119 22:20:02.626191  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:20:02.626239  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:02.630831  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:20:02.630910  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:20:02.663060  216336 cri.go:89] found id: ""
	I1119 22:20:02.663088  216336 logs.go:282] 0 containers: []
	W1119 22:20:02.663098  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:20:02.663106  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:20:02.663159  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:20:02.692104  216336 cri.go:89] found id: ""
	I1119 22:20:02.692132  216336 logs.go:282] 0 containers: []
	W1119 22:20:02.692142  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:20:02.692159  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:20:02.692172  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:02.730157  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:20:02.730198  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:20:02.764408  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:20:02.764435  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:20:02.871409  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:20:02.871460  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:20:02.912737  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:20:02.912778  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:02.958177  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:20:02.958229  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:03.003908  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:20:03.003950  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:20:03.062041  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:20:03.062076  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:20:03.080938  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:20:03.080972  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:20:03.153154  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:20:03.153177  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:20:03.153191  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:02.633509  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:03.132907  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:03.633598  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:04.133836  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:04.632911  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:05.133740  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:05.633397  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:06.133422  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:06.633053  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:07.133122  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:07.632971  244005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:07.709877  244005 kubeadm.go:1114] duration metric: took 12.184544724s to wait for elevateKubeSystemPrivileges
	I1119 22:20:07.709929  244005 kubeadm.go:403] duration metric: took 23.328681682s to StartCluster
	I1119 22:20:07.709949  244005 settings.go:142] acquiring lock: {Name:mk3c795849984e82ee99295088dd85252bd75f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:07.710024  244005 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:20:07.711281  244005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/kubeconfig: {Name:mk5b9093863cb8ca8629eea9fd861356875781d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:07.726769  244005 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:20:07.726909  244005 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:20:07.727036  244005 config.go:182] Loaded profile config "old-k8s-version-975700": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 22:20:07.727028  244005 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:20:07.727107  244005 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-975700"
	I1119 22:20:07.727154  244005 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-975700"
	I1119 22:20:07.727201  244005 host.go:66] Checking if "old-k8s-version-975700" exists ...
	I1119 22:20:07.727269  244005 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-975700"
	I1119 22:20:07.727331  244005 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-975700"
	I1119 22:20:07.727652  244005 cli_runner.go:164] Run: docker container inspect old-k8s-version-975700 --format={{.State.Status}}
	I1119 22:20:07.727759  244005 cli_runner.go:164] Run: docker container inspect old-k8s-version-975700 --format={{.State.Status}}
	I1119 22:20:07.759624  244005 out.go:179] * Verifying Kubernetes components...
	I1119 22:20:07.760449  244005 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-975700"
	I1119 22:20:07.760487  244005 host.go:66] Checking if "old-k8s-version-975700" exists ...
	I1119 22:20:07.760848  244005 cli_runner.go:164] Run: docker container inspect old-k8s-version-975700 --format={{.State.Status}}
	I1119 22:20:07.781264  244005 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:20:07.781292  244005 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:20:07.781358  244005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-975700
	I1119 22:20:07.790624  244005 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:20:07.790705  244005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:20:07.805293  244005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/old-k8s-version-975700/id_rsa Username:docker}
	I1119 22:20:07.811125  244005 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:20:07.811152  244005 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:20:07.811221  244005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-975700
	I1119 22:20:07.839037  244005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/old-k8s-version-975700/id_rsa Username:docker}
	I1119 22:20:07.927378  244005 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:20:07.930474  244005 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:20:07.930565  244005 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:20:07.945012  244005 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:20:08.325616  244005 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1119 22:20:08.326981  244005 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-975700" to be "Ready" ...
	I1119 22:20:08.525071  244005 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1119 22:20:05.409665  248121 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (2.845117956s)
	I1119 22:20:05.409701  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1119 22:20:05.409742  248121 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1119 22:20:05.409813  248121 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1119 22:20:05.827105  248121 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1119 22:20:05.827149  248121 cache_images.go:125] Successfully loaded all cached images
	I1119 22:20:05.827155  248121 cache_images.go:94] duration metric: took 9.641883158s to LoadCachedImages
	I1119 22:20:05.827169  248121 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 containerd true true} ...
	I1119 22:20:05.827281  248121 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-638439 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-638439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:20:05.827350  248121 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:20:05.854538  248121 cni.go:84] Creating CNI manager for ""
	I1119 22:20:05.854565  248121 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:20:05.854580  248121 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:20:05.854605  248121 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-638439 NodeName:no-preload-638439 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:20:05.854728  248121 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-638439"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:20:05.854794  248121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:20:05.863483  248121 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1119 22:20:05.863536  248121 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1119 22:20:05.871942  248121 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1119 22:20:05.871968  248121 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1119 22:20:05.871947  248121 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1119 22:20:05.872035  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1119 22:20:05.876399  248121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1119 22:20:05.876433  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1119 22:20:07.043592  248121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:20:07.058665  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1119 22:20:07.063097  248121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1119 22:20:07.063136  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1119 22:20:07.259328  248121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1119 22:20:07.263904  248121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1119 22:20:07.263944  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1119 22:20:07.467537  248121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:20:07.476103  248121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1119 22:20:07.489039  248121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:20:07.504456  248121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1119 22:20:07.517675  248121 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:20:07.521966  248121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:20:07.532448  248121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:20:07.616669  248121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:20:07.647854  248121 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439 for IP: 192.168.103.2
	I1119 22:20:07.647911  248121 certs.go:195] generating shared ca certs ...
	I1119 22:20:07.647941  248121 certs.go:227] acquiring lock for ca certs: {Name:mkfe62d1b64cfdbe1c6a3d1f38aa0edc5b9ec419 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:07.648100  248121 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key
	I1119 22:20:07.648156  248121 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key
	I1119 22:20:07.648169  248121 certs.go:257] generating profile certs ...
	I1119 22:20:07.648233  248121 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.key
	I1119 22:20:07.648249  248121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.crt with IP's: []
	I1119 22:20:08.248835  248121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.crt ...
	I1119 22:20:08.248872  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.crt: {Name:mk71551595bc691ff029aa4f22d8136d735c86c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:08.249095  248121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.key ...
	I1119 22:20:08.249107  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.key: {Name:mk7714d393e738013c7abe0f1689bcf490e26b5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:08.249250  248121 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.key.6e1d1cff
	I1119 22:20:08.249267  248121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.crt.6e1d1cff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1119 22:20:09.018572  248121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.crt.6e1d1cff ...
	I1119 22:20:09.018603  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.crt.6e1d1cff: {Name:mk1a2db3ea3ff5c82c4c822f2131fbadbd39c724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:09.018790  248121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.key.6e1d1cff ...
	I1119 22:20:09.018808  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.key.6e1d1cff: {Name:mk13f089d71bdc7abee8608285249f8ab5ad14b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:09.018926  248121 certs.go:382] copying /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.crt.6e1d1cff -> /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.crt
	I1119 22:20:09.019033  248121 certs.go:386] copying /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.key.6e1d1cff -> /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.key
	I1119 22:20:09.019118  248121 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.key
	I1119 22:20:09.019145  248121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.crt with IP's: []
	I1119 22:20:09.141320  248121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.crt ...
	I1119 22:20:09.141353  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.crt: {Name:mke73db150d5fe88961c2b7ca7e43e6cb8c1e87f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:09.141532  248121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.key ...
	I1119 22:20:09.141550  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.key: {Name:mk65b56a4bcd9d60fdf62f046abf7a5abe0e729f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:09.141750  248121 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem (1338 bytes)
	W1119 22:20:09.141799  248121 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821_empty.pem, impossibly tiny 0 bytes
	I1119 22:20:09.141812  248121 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:20:09.141845  248121 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:20:09.141894  248121 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:20:09.141928  248121 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem (1679 bytes)
	I1119 22:20:09.141984  248121 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem (1708 bytes)
	I1119 22:20:09.142554  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:20:09.161569  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:20:09.180990  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:20:09.199264  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:20:09.217135  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 22:20:09.236364  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:20:09.255084  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:20:09.274604  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:20:09.293451  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:20:09.315834  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem --> /usr/share/ca-certificates/12821.pem (1338 bytes)
	I1119 22:20:09.336567  248121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem --> /usr/share/ca-certificates/128212.pem (1708 bytes)
	I1119 22:20:09.354248  248121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:20:09.367868  248121 ssh_runner.go:195] Run: openssl version
	I1119 22:20:09.374260  248121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:20:09.383332  248121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:20:09.387801  248121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:48 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:20:09.387864  248121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:20:09.424342  248121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:20:09.433605  248121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12821.pem && ln -fs /usr/share/ca-certificates/12821.pem /etc/ssl/certs/12821.pem"
	I1119 22:20:09.442478  248121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12821.pem
	I1119 22:20:09.446634  248121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12821.pem
	I1119 22:20:09.446694  248121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12821.pem
	I1119 22:20:09.481876  248121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12821.pem /etc/ssl/certs/51391683.0"
	I1119 22:20:09.491181  248121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128212.pem && ln -fs /usr/share/ca-certificates/128212.pem /etc/ssl/certs/128212.pem"
	I1119 22:20:09.499823  248121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128212.pem
	I1119 22:20:09.503986  248121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128212.pem
	I1119 22:20:09.504043  248121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128212.pem
	I1119 22:20:09.539481  248121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128212.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:20:09.548630  248121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:20:09.552649  248121 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:20:09.552709  248121 kubeadm.go:401] StartCluster: {Name:no-preload-638439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-638439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:20:09.552800  248121 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:20:09.552841  248121 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:20:09.580504  248121 cri.go:89] found id: ""
	I1119 22:20:09.580577  248121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:20:09.588825  248121 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:20:09.597263  248121 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:20:09.597312  248121 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:20:09.605431  248121 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:20:09.605448  248121 kubeadm.go:158] found existing configuration files:
	
	I1119 22:20:09.605505  248121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 22:20:09.613580  248121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:20:09.613647  248121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:20:09.621432  248121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 22:20:09.629381  248121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:20:09.629444  248121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:20:09.637498  248121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 22:20:09.645457  248121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:20:09.645500  248121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:20:09.653775  248121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 22:20:09.662581  248121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:20:09.662631  248121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 22:20:09.670267  248121 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:20:09.705969  248121 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:20:09.706049  248121 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:20:09.725461  248121 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:20:09.725557  248121 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 22:20:09.725619  248121 kubeadm.go:319] OS: Linux
	I1119 22:20:09.725688  248121 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:20:09.725759  248121 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:20:09.725823  248121 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:20:09.725926  248121 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:20:09.726011  248121 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:20:09.726090  248121 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:20:09.726169  248121 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:20:09.726247  248121 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 22:20:09.785631  248121 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:20:09.785785  248121 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:20:09.785930  248121 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:20:09.790816  248121 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:20:05.698391  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:20:08.526183  244005 addons.go:515] duration metric: took 799.151282ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1119 22:20:08.830648  244005 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-975700" context rescaled to 1 replicas
	W1119 22:20:10.330548  244005 node_ready.go:57] node "old-k8s-version-975700" has "Ready":"False" status (will retry)
	W1119 22:20:12.330688  244005 node_ready.go:57] node "old-k8s-version-975700" has "Ready":"False" status (will retry)
	I1119 22:20:09.792948  248121 out.go:252]   - Generating certificates and keys ...
	I1119 22:20:09.793051  248121 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:20:09.793149  248121 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:20:10.738826  248121 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:20:10.908170  248121 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:20:11.291841  248121 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:20:11.623960  248121 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:20:11.828384  248121 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:20:11.828565  248121 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-638439] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 22:20:12.233215  248121 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:20:12.233354  248121 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-638439] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 22:20:12.358552  248121 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:20:12.567027  248121 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:20:12.649341  248121 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:20:12.649468  248121 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:20:12.821942  248121 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:20:13.184331  248121 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:20:13.249251  248121 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:20:13.507036  248121 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:20:13.992391  248121 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:20:13.992949  248121 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:20:14.073515  248121 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:20:10.699588  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1119 22:20:10.699656  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:20:10.699719  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:20:10.736721  216336 cri.go:89] found id: "7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:10.736747  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:20:10.736753  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:10.736758  216336 cri.go:89] found id: ""
	I1119 22:20:10.736767  216336 logs.go:282] 3 containers: [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:20:10.736834  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:10.742155  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:10.747306  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:10.752281  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:20:10.752356  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:20:10.785664  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:10.785691  216336 cri.go:89] found id: ""
	I1119 22:20:10.785700  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:20:10.785758  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:10.791037  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:20:10.791107  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:20:10.827690  216336 cri.go:89] found id: ""
	I1119 22:20:10.827736  216336 logs.go:282] 0 containers: []
	W1119 22:20:10.827749  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:20:10.827781  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:20:10.827856  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:20:10.860463  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:10.860489  216336 cri.go:89] found id: ""
	I1119 22:20:10.860499  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:20:10.860557  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:10.865818  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:20:10.865902  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:20:10.896395  216336 cri.go:89] found id: ""
	I1119 22:20:10.896425  216336 logs.go:282] 0 containers: []
	W1119 22:20:10.896457  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:20:10.896464  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:20:10.896524  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:20:10.927065  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:10.927091  216336 cri.go:89] found id: ""
	I1119 22:20:10.927100  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:20:10.927157  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:10.931718  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:20:10.931789  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:20:10.960849  216336 cri.go:89] found id: ""
	I1119 22:20:10.960892  216336 logs.go:282] 0 containers: []
	W1119 22:20:10.960903  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:20:10.960910  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:20:10.960962  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:20:10.993029  216336 cri.go:89] found id: ""
	I1119 22:20:10.993057  216336 logs.go:282] 0 containers: []
	W1119 22:20:10.993067  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:20:10.993080  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:20:10.993094  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:20:11.027974  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:20:11.028010  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:11.062086  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:20:11.062120  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:11.103210  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:20:11.103250  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:11.145837  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:20:11.145872  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:20:11.199841  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:20:11.199937  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:20:11.236586  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:20:11.236618  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:20:11.253432  216336 logs.go:123] Gathering logs for kube-apiserver [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7] ...
	I1119 22:20:11.253487  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:11.295903  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:20:11.295943  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:11.337708  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:20:11.337745  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:20:11.452249  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:20:11.452285  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:20:14.830008  244005 node_ready.go:57] node "old-k8s-version-975700" has "Ready":"False" status (will retry)
	W1119 22:20:16.830268  244005 node_ready.go:57] node "old-k8s-version-975700" has "Ready":"False" status (will retry)
	I1119 22:20:14.075591  248121 out.go:252]   - Booting up control plane ...
	I1119 22:20:14.075701  248121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:20:14.075795  248121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:20:14.076511  248121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:20:14.092600  248121 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:20:14.092767  248121 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:20:14.099651  248121 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:20:14.099786  248121 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:20:14.099865  248121 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:20:14.205620  248121 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:20:14.205784  248121 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:20:14.707136  248121 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.67843ms
	I1119 22:20:14.711176  248121 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:20:14.711406  248121 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1119 22:20:14.711556  248121 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:20:14.711669  248121 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:20:16.370429  248121 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.659105526s
	I1119 22:20:16.919263  248121 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.208262146s
	I1119 22:20:18.712413  248121 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001122323s
	I1119 22:20:18.724319  248121 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:20:18.734195  248121 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:20:18.743489  248121 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:20:18.743707  248121 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-638439 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:20:18.749843  248121 kubeadm.go:319] [bootstrap-token] Using token: tkvbyg.4blpqvlc8c0koqab
	I1119 22:20:18.751541  248121 out.go:252]   - Configuring RBAC rules ...
	I1119 22:20:18.751647  248121 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:20:18.754347  248121 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:20:18.760461  248121 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:20:18.763019  248121 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:20:18.765434  248121 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:20:18.768021  248121 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:20:19.119568  248121 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:20:19.537037  248121 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:20:20.119469  248121 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:20:20.120399  248121 kubeadm.go:319] 
	I1119 22:20:20.120467  248121 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:20:20.120472  248121 kubeadm.go:319] 
	I1119 22:20:20.120605  248121 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:20:20.120632  248121 kubeadm.go:319] 
	I1119 22:20:20.120661  248121 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:20:20.120719  248121 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:20:20.120765  248121 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:20:20.120772  248121 kubeadm.go:319] 
	I1119 22:20:20.120845  248121 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:20:20.120857  248121 kubeadm.go:319] 
	I1119 22:20:20.121004  248121 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:20:20.121029  248121 kubeadm.go:319] 
	I1119 22:20:20.121103  248121 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:20:20.121207  248121 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:20:20.121271  248121 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:20:20.121297  248121 kubeadm.go:319] 
	I1119 22:20:20.121444  248121 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:20:20.121523  248121 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:20:20.121533  248121 kubeadm.go:319] 
	I1119 22:20:20.121611  248121 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tkvbyg.4blpqvlc8c0koqab \
	I1119 22:20:20.121712  248121 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6123875ff628fb9eedbd72f2253477865aa197083b84a1d60cb6c00de308bc63 \
	I1119 22:20:20.121734  248121 kubeadm.go:319] 	--control-plane 
	I1119 22:20:20.121738  248121 kubeadm.go:319] 
	I1119 22:20:20.121810  248121 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:20:20.121816  248121 kubeadm.go:319] 
	I1119 22:20:20.121927  248121 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tkvbyg.4blpqvlc8c0koqab \
	I1119 22:20:20.122034  248121 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6123875ff628fb9eedbd72f2253477865aa197083b84a1d60cb6c00de308bc63 
	I1119 22:20:20.124555  248121 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 22:20:20.124740  248121 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:20:20.124773  248121 cni.go:84] Creating CNI manager for ""
	I1119 22:20:20.124786  248121 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:20:20.127350  248121 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1119 22:20:19.330624  244005 node_ready.go:57] node "old-k8s-version-975700" has "Ready":"False" status (will retry)
	W1119 22:20:21.830427  244005 node_ready.go:57] node "old-k8s-version-975700" has "Ready":"False" status (will retry)
	I1119 22:20:22.330516  244005 node_ready.go:49] node "old-k8s-version-975700" is "Ready"
	I1119 22:20:22.330545  244005 node_ready.go:38] duration metric: took 14.003533581s for node "old-k8s-version-975700" to be "Ready" ...
	I1119 22:20:22.330557  244005 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:20:22.330607  244005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:20:22.343206  244005 api_server.go:72] duration metric: took 14.6162161s to wait for apiserver process to appear ...
	I1119 22:20:22.343236  244005 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:20:22.343259  244005 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:20:22.347053  244005 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1119 22:20:22.348151  244005 api_server.go:141] control plane version: v1.28.0
	I1119 22:20:22.348175  244005 api_server.go:131] duration metric: took 4.933094ms to wait for apiserver health ...
	I1119 22:20:22.348183  244005 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:20:22.351821  244005 system_pods.go:59] 8 kube-system pods found
	I1119 22:20:22.351849  244005 system_pods.go:61] "coredns-5dd5756b68-8hdh7" [a4057bf2-fe2e-42db-83e9-bc625724c61c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:20:22.351854  244005 system_pods.go:61] "etcd-old-k8s-version-975700" [12a76858-b7be-4963-8323-fe57ca12a08d] Running
	I1119 22:20:22.351860  244005 system_pods.go:61] "kindnet-mlzfc" [e2532f4d-a32b-45a0-b846-1d2ecea1f926] Running
	I1119 22:20:22.351864  244005 system_pods.go:61] "kube-apiserver-old-k8s-version-975700" [28d03966-c950-4e9c-bbd5-4aeb08bb3363] Running
	I1119 22:20:22.351869  244005 system_pods.go:61] "kube-controller-manager-old-k8s-version-975700" [b2f2d323-34b1-47a7-945e-73086e2e6887] Running
	I1119 22:20:22.351873  244005 system_pods.go:61] "kube-proxy-rnxxf" [f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d] Running
	I1119 22:20:22.351877  244005 system_pods.go:61] "kube-scheduler-old-k8s-version-975700" [65c95750-3a2f-4847-a93d-4e54bc709449] Running
	I1119 22:20:22.351892  244005 system_pods.go:61] "storage-provisioner" [6c937194-8889-47a0-b05f-7af799e18044] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:20:22.351898  244005 system_pods.go:74] duration metric: took 3.709193ms to wait for pod list to return data ...
	I1119 22:20:22.351906  244005 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:20:22.353863  244005 default_sa.go:45] found service account: "default"
	I1119 22:20:22.353906  244005 default_sa.go:55] duration metric: took 1.968518ms for default service account to be created ...
	I1119 22:20:22.353917  244005 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:20:22.356763  244005 system_pods.go:86] 8 kube-system pods found
	I1119 22:20:22.356787  244005 system_pods.go:89] "coredns-5dd5756b68-8hdh7" [a4057bf2-fe2e-42db-83e9-bc625724c61c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:20:22.356792  244005 system_pods.go:89] "etcd-old-k8s-version-975700" [12a76858-b7be-4963-8323-fe57ca12a08d] Running
	I1119 22:20:22.356799  244005 system_pods.go:89] "kindnet-mlzfc" [e2532f4d-a32b-45a0-b846-1d2ecea1f926] Running
	I1119 22:20:22.356803  244005 system_pods.go:89] "kube-apiserver-old-k8s-version-975700" [28d03966-c950-4e9c-bbd5-4aeb08bb3363] Running
	I1119 22:20:22.356810  244005 system_pods.go:89] "kube-controller-manager-old-k8s-version-975700" [b2f2d323-34b1-47a7-945e-73086e2e6887] Running
	I1119 22:20:22.356813  244005 system_pods.go:89] "kube-proxy-rnxxf" [f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d] Running
	I1119 22:20:22.356817  244005 system_pods.go:89] "kube-scheduler-old-k8s-version-975700" [65c95750-3a2f-4847-a93d-4e54bc709449] Running
	I1119 22:20:22.356822  244005 system_pods.go:89] "storage-provisioner" [6c937194-8889-47a0-b05f-7af799e18044] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:20:22.356838  244005 retry.go:31] will retry after 295.130955ms: missing components: kube-dns
	I1119 22:20:20.128552  248121 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:20:20.133893  248121 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:20:20.133928  248121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:20:20.148247  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:20:20.366418  248121 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:20:20.366472  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:20.366530  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-638439 minikube.k8s.io/updated_at=2025_11_19T22_20_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=no-preload-638439 minikube.k8s.io/primary=true
	I1119 22:20:20.472760  248121 ops.go:34] apiserver oom_adj: -16
	I1119 22:20:20.472956  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:20.973815  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:21.473583  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:21.973622  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:22.473704  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:22.973336  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:23.473849  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:23.973455  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:24.472997  248121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:20:24.537110  248121 kubeadm.go:1114] duration metric: took 4.170685845s to wait for elevateKubeSystemPrivileges
	I1119 22:20:24.537150  248121 kubeadm.go:403] duration metric: took 14.984446293s to StartCluster
	I1119 22:20:24.537173  248121 settings.go:142] acquiring lock: {Name:mk3c795849984e82ee99295088dd85252bd75f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:24.537243  248121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:20:24.539105  248121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/kubeconfig: {Name:mk5b9093863cb8ca8629eea9fd861356875781d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:20:24.539319  248121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:20:24.539342  248121 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:20:24.539397  248121 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:20:24.539519  248121 addons.go:70] Setting storage-provisioner=true in profile "no-preload-638439"
	I1119 22:20:24.539532  248121 config.go:182] Loaded profile config "no-preload-638439": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:20:24.539540  248121 addons.go:239] Setting addon storage-provisioner=true in "no-preload-638439"
	I1119 22:20:24.539552  248121 addons.go:70] Setting default-storageclass=true in profile "no-preload-638439"
	I1119 22:20:24.539577  248121 host.go:66] Checking if "no-preload-638439" exists ...
	I1119 22:20:24.539588  248121 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-638439"
	I1119 22:20:24.539936  248121 cli_runner.go:164] Run: docker container inspect no-preload-638439 --format={{.State.Status}}
	I1119 22:20:24.540134  248121 cli_runner.go:164] Run: docker container inspect no-preload-638439 --format={{.State.Status}}
	I1119 22:20:24.541288  248121 out.go:179] * Verifying Kubernetes components...
	I1119 22:20:24.543039  248121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:20:24.564207  248121 addons.go:239] Setting addon default-storageclass=true in "no-preload-638439"
	I1119 22:20:24.564253  248121 host.go:66] Checking if "no-preload-638439" exists ...
	I1119 22:20:24.564597  248121 cli_runner.go:164] Run: docker container inspect no-preload-638439 --format={{.State.Status}}
	I1119 22:20:24.564680  248121 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:20:24.568527  248121 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:20:24.568546  248121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:20:24.568596  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:20:24.597385  248121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa Username:docker}
	I1119 22:20:24.599498  248121 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:20:24.599523  248121 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:20:24.599582  248121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-638439
	I1119 22:20:24.624046  248121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/no-preload-638439/id_rsa Username:docker}
	I1119 22:20:24.628608  248121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:20:24.684697  248121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:20:24.711970  248121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:20:24.742786  248121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:20:24.836401  248121 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1119 22:20:24.837864  248121 node_ready.go:35] waiting up to 6m0s for node "no-preload-638439" to be "Ready" ...
	I1119 22:20:25.026785  248121 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
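	[editor's note] The lines above show minikube enabling the storage-provisioner and default-storageclass addons and then beginning a 6m0s wait for node "no-preload-638439" to report Ready (node_ready.go). Below is a minimal, illustrative Go sketch of such a readiness poll using client-go; the kubeconfig path, node name, and retry interval are assumptions for illustration and this is not minikube's actual implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig path; minikube resolves this from the profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(6 * time.Minute) // the log waits up to 6m0s
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-638439", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					// A node is "Ready" when its NodeReady condition is True.
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second) // retry interval is an assumption
		}
		fmt.Println("timed out waiting for node to become Ready")
	}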
	I1119 22:20:21.527976  216336 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.075664087s)
	W1119 22:20:21.528025  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1119 22:20:24.028516  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
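	[editor's note] The api_server.go lines in this segment probe https://192.168.76.2:8443/healthz and log "stopped" when the connection is refused, reset, or the TLS handshake times out. A minimal sketch of that probe follows, assuming a plain HTTPS GET with TLS verification disabled purely for brevity (minikube's real client trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustration only: skip certificate verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			// e.g. "connection refused" or "TLS handshake timeout", as seen in the log
			fmt.Println("stopped:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver returns 200 with body "ok".
		fmt.Printf("returned %d: %s\n", resp.StatusCode, body)
	}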
	I1119 22:20:22.657454  244005 system_pods.go:86] 8 kube-system pods found
	I1119 22:20:22.657490  244005 system_pods.go:89] "coredns-5dd5756b68-8hdh7" [a4057bf2-fe2e-42db-83e9-bc625724c61c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:20:22.657499  244005 system_pods.go:89] "etcd-old-k8s-version-975700" [12a76858-b7be-4963-8323-fe57ca12a08d] Running
	I1119 22:20:22.657508  244005 system_pods.go:89] "kindnet-mlzfc" [e2532f4d-a32b-45a0-b846-1d2ecea1f926] Running
	I1119 22:20:22.657513  244005 system_pods.go:89] "kube-apiserver-old-k8s-version-975700" [28d03966-c950-4e9c-bbd5-4aeb08bb3363] Running
	I1119 22:20:22.657520  244005 system_pods.go:89] "kube-controller-manager-old-k8s-version-975700" [b2f2d323-34b1-47a7-945e-73086e2e6887] Running
	I1119 22:20:22.657526  244005 system_pods.go:89] "kube-proxy-rnxxf" [f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d] Running
	I1119 22:20:22.657534  244005 system_pods.go:89] "kube-scheduler-old-k8s-version-975700" [65c95750-3a2f-4847-a93d-4e54bc709449] Running
	I1119 22:20:22.657541  244005 system_pods.go:89] "storage-provisioner" [6c937194-8889-47a0-b05f-7af799e18044] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:20:22.657562  244005 retry.go:31] will retry after 290.603952ms: missing components: kube-dns
	I1119 22:20:22.951933  244005 system_pods.go:86] 8 kube-system pods found
	I1119 22:20:22.951963  244005 system_pods.go:89] "coredns-5dd5756b68-8hdh7" [a4057bf2-fe2e-42db-83e9-bc625724c61c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:20:22.951969  244005 system_pods.go:89] "etcd-old-k8s-version-975700" [12a76858-b7be-4963-8323-fe57ca12a08d] Running
	I1119 22:20:22.951974  244005 system_pods.go:89] "kindnet-mlzfc" [e2532f4d-a32b-45a0-b846-1d2ecea1f926] Running
	I1119 22:20:22.951978  244005 system_pods.go:89] "kube-apiserver-old-k8s-version-975700" [28d03966-c950-4e9c-bbd5-4aeb08bb3363] Running
	I1119 22:20:22.951983  244005 system_pods.go:89] "kube-controller-manager-old-k8s-version-975700" [b2f2d323-34b1-47a7-945e-73086e2e6887] Running
	I1119 22:20:22.951988  244005 system_pods.go:89] "kube-proxy-rnxxf" [f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d] Running
	I1119 22:20:22.951992  244005 system_pods.go:89] "kube-scheduler-old-k8s-version-975700" [65c95750-3a2f-4847-a93d-4e54bc709449] Running
	I1119 22:20:22.951996  244005 system_pods.go:89] "storage-provisioner" [6c937194-8889-47a0-b05f-7af799e18044] Running
	I1119 22:20:22.952009  244005 retry.go:31] will retry after 460.674944ms: missing components: kube-dns
	I1119 22:20:23.417271  244005 system_pods.go:86] 8 kube-system pods found
	I1119 22:20:23.417302  244005 system_pods.go:89] "coredns-5dd5756b68-8hdh7" [a4057bf2-fe2e-42db-83e9-bc625724c61c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:20:23.417309  244005 system_pods.go:89] "etcd-old-k8s-version-975700" [12a76858-b7be-4963-8323-fe57ca12a08d] Running
	I1119 22:20:23.417314  244005 system_pods.go:89] "kindnet-mlzfc" [e2532f4d-a32b-45a0-b846-1d2ecea1f926] Running
	I1119 22:20:23.417320  244005 system_pods.go:89] "kube-apiserver-old-k8s-version-975700" [28d03966-c950-4e9c-bbd5-4aeb08bb3363] Running
	I1119 22:20:23.417326  244005 system_pods.go:89] "kube-controller-manager-old-k8s-version-975700" [b2f2d323-34b1-47a7-945e-73086e2e6887] Running
	I1119 22:20:23.417331  244005 system_pods.go:89] "kube-proxy-rnxxf" [f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d] Running
	I1119 22:20:23.417336  244005 system_pods.go:89] "kube-scheduler-old-k8s-version-975700" [65c95750-3a2f-4847-a93d-4e54bc709449] Running
	I1119 22:20:23.417341  244005 system_pods.go:89] "storage-provisioner" [6c937194-8889-47a0-b05f-7af799e18044] Running
	I1119 22:20:23.417365  244005 retry.go:31] will retry after 513.116078ms: missing components: kube-dns
	I1119 22:20:23.935257  244005 system_pods.go:86] 8 kube-system pods found
	I1119 22:20:23.935284  244005 system_pods.go:89] "coredns-5dd5756b68-8hdh7" [a4057bf2-fe2e-42db-83e9-bc625724c61c] Running
	I1119 22:20:23.935290  244005 system_pods.go:89] "etcd-old-k8s-version-975700" [12a76858-b7be-4963-8323-fe57ca12a08d] Running
	I1119 22:20:23.935294  244005 system_pods.go:89] "kindnet-mlzfc" [e2532f4d-a32b-45a0-b846-1d2ecea1f926] Running
	I1119 22:20:23.935297  244005 system_pods.go:89] "kube-apiserver-old-k8s-version-975700" [28d03966-c950-4e9c-bbd5-4aeb08bb3363] Running
	I1119 22:20:23.935301  244005 system_pods.go:89] "kube-controller-manager-old-k8s-version-975700" [b2f2d323-34b1-47a7-945e-73086e2e6887] Running
	I1119 22:20:23.935304  244005 system_pods.go:89] "kube-proxy-rnxxf" [f06c0f26-a6bc-4dcb-a9f4-c64b43b4cc1d] Running
	I1119 22:20:23.935308  244005 system_pods.go:89] "kube-scheduler-old-k8s-version-975700" [65c95750-3a2f-4847-a93d-4e54bc709449] Running
	I1119 22:20:23.935311  244005 system_pods.go:89] "storage-provisioner" [6c937194-8889-47a0-b05f-7af799e18044] Running
	I1119 22:20:23.935318  244005 system_pods.go:126] duration metric: took 1.581396028s to wait for k8s-apps to be running ...
	I1119 22:20:23.935324  244005 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:20:23.935362  244005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:20:23.948529  244005 system_svc.go:56] duration metric: took 13.192475ms WaitForService to wait for kubelet
	I1119 22:20:23.948562  244005 kubeadm.go:587] duration metric: took 16.221575338s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:20:23.948584  244005 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:20:23.951344  244005 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 22:20:23.951368  244005 node_conditions.go:123] node cpu capacity is 8
	I1119 22:20:23.951381  244005 node_conditions.go:105] duration metric: took 2.792615ms to run NodePressure ...
	I1119 22:20:23.951394  244005 start.go:242] waiting for startup goroutines ...
	I1119 22:20:23.951400  244005 start.go:247] waiting for cluster config update ...
	I1119 22:20:23.951411  244005 start.go:256] writing updated cluster config ...
	I1119 22:20:23.951671  244005 ssh_runner.go:195] Run: rm -f paused
	I1119 22:20:23.955724  244005 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:20:23.960403  244005 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-8hdh7" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:23.964724  244005 pod_ready.go:94] pod "coredns-5dd5756b68-8hdh7" is "Ready"
	I1119 22:20:23.964745  244005 pod_ready.go:86] duration metric: took 4.323941ms for pod "coredns-5dd5756b68-8hdh7" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:23.969212  244005 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:23.977143  244005 pod_ready.go:94] pod "etcd-old-k8s-version-975700" is "Ready"
	I1119 22:20:23.977172  244005 pod_ready.go:86] duration metric: took 7.932702ms for pod "etcd-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:23.984279  244005 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:23.990403  244005 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-975700" is "Ready"
	I1119 22:20:23.990436  244005 pod_ready.go:86] duration metric: took 6.116437ms for pod "kube-apiserver-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:23.994759  244005 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:24.360199  244005 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-975700" is "Ready"
	I1119 22:20:24.360227  244005 pod_ready.go:86] duration metric: took 365.436099ms for pod "kube-controller-manager-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:24.562023  244005 pod_ready.go:83] waiting for pod "kube-proxy-rnxxf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:24.960397  244005 pod_ready.go:94] pod "kube-proxy-rnxxf" is "Ready"
	I1119 22:20:24.960428  244005 pod_ready.go:86] duration metric: took 398.37739ms for pod "kube-proxy-rnxxf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:25.161533  244005 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:25.560960  244005 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-975700" is "Ready"
	I1119 22:20:25.560992  244005 pod_ready.go:86] duration metric: took 399.43384ms for pod "kube-scheduler-old-k8s-version-975700" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:25.561003  244005 pod_ready.go:40] duration metric: took 1.605243985s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:20:25.605359  244005 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1119 22:20:25.607589  244005 out.go:203] 
	W1119 22:20:25.608986  244005 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 22:20:25.610519  244005 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 22:20:25.612224  244005 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-975700" cluster and "default" namespace by default
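	[editor's note] The pod_ready.go waits above check each control-plane pod's Ready condition, selecting pods by labels such as k8s-app=kube-dns or component=etcd. A hedged client-go sketch of that per-pod check follows; the kubeconfig path and label selector are assumptions and this is not minikube's own code:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the pod's Ready condition is True.
	func podIsReady(p corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed kubeconfig location for the test host.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("pod %q Ready=%v\n", p.Name, podIsReady(p))
		}
	}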
	I1119 22:20:25.028260  248121 addons.go:515] duration metric: took 488.871855ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 22:20:25.340186  248121 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-638439" context rescaled to 1 replicas
	W1119 22:20:26.840695  248121 node_ready.go:57] node "no-preload-638439" has "Ready":"False" status (will retry)
	W1119 22:20:28.841182  248121 node_ready.go:57] node "no-preload-638439" has "Ready":"False" status (will retry)
	I1119 22:20:26.041396  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:42420->192.168.76.2:8443: read: connection reset by peer
	I1119 22:20:26.041468  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:20:26.041590  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:20:26.074121  216336 cri.go:89] found id: "7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:26.074147  216336 cri.go:89] found id: "0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:20:26.074156  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:26.074161  216336 cri.go:89] found id: ""
	I1119 22:20:26.074169  216336 logs.go:282] 3 containers: [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:20:26.074227  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:26.080252  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:26.086170  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:26.090514  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:20:26.090588  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:20:26.119338  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:26.119365  216336 cri.go:89] found id: ""
	I1119 22:20:26.119375  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:20:26.119431  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:26.123237  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:20:26.123308  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:20:26.150429  216336 cri.go:89] found id: ""
	I1119 22:20:26.150465  216336 logs.go:282] 0 containers: []
	W1119 22:20:26.150475  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:20:26.150488  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:20:26.150553  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:20:26.180127  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:26.180150  216336 cri.go:89] found id: ""
	I1119 22:20:26.180167  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:20:26.180222  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:26.185074  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:20:26.185141  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:20:26.216334  216336 cri.go:89] found id: ""
	I1119 22:20:26.216362  216336 logs.go:282] 0 containers: []
	W1119 22:20:26.216373  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:20:26.216381  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:20:26.216440  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:20:26.246928  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:26.246952  216336 cri.go:89] found id: ""
	I1119 22:20:26.246962  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:20:26.247027  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:26.252210  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:20:26.252281  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:20:26.283008  216336 cri.go:89] found id: ""
	I1119 22:20:26.283052  216336 logs.go:282] 0 containers: []
	W1119 22:20:26.283086  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:20:26.283101  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:20:26.283160  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:20:26.311983  216336 cri.go:89] found id: ""
	I1119 22:20:26.312016  216336 logs.go:282] 0 containers: []
	W1119 22:20:26.312026  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:20:26.312040  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:20:26.312059  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:20:26.372080  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:20:26.372108  216336 logs.go:123] Gathering logs for kube-apiserver [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7] ...
	I1119 22:20:26.372123  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:26.410125  216336 logs.go:123] Gathering logs for kube-apiserver [0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0] ...
	I1119 22:20:26.410156  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0de7a80fd8d36adf98c40ede94a9bc05ff5a19ea1f7de9d22cfe4fab02ee04d0"
	I1119 22:20:26.445052  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:20:26.445081  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:20:26.488314  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:20:26.488348  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:20:26.519759  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:20:26.519786  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:20:26.607720  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:20:26.607753  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:20:26.622164  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:20:26.622196  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:26.658569  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:20:26.658598  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:26.690380  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:20:26.690410  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:26.723334  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:20:26.723368  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:29.254435  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:20:29.254927  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:20:29.254988  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:20:29.255050  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:20:29.281477  216336 cri.go:89] found id: "7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:29.281503  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:29.281509  216336 cri.go:89] found id: ""
	I1119 22:20:29.281518  216336 logs.go:282] 2 containers: [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:20:29.281576  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:29.285991  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:29.289786  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:20:29.289841  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:20:29.315177  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:29.315199  216336 cri.go:89] found id: ""
	I1119 22:20:29.315208  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:20:29.315264  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:29.319376  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:20:29.319444  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:20:29.346951  216336 cri.go:89] found id: ""
	I1119 22:20:29.346973  216336 logs.go:282] 0 containers: []
	W1119 22:20:29.346980  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:20:29.346998  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:20:29.347043  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:20:29.374529  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:29.374549  216336 cri.go:89] found id: ""
	I1119 22:20:29.374556  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:20:29.374608  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:29.378833  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:20:29.378918  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:20:29.409418  216336 cri.go:89] found id: ""
	I1119 22:20:29.409456  216336 logs.go:282] 0 containers: []
	W1119 22:20:29.409468  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:20:29.409476  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:20:29.409542  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:20:29.439747  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:29.439767  216336 cri.go:89] found id: ""
	I1119 22:20:29.439775  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:20:29.439832  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:29.443967  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:20:29.444041  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:20:29.469669  216336 cri.go:89] found id: ""
	I1119 22:20:29.469695  216336 logs.go:282] 0 containers: []
	W1119 22:20:29.469705  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:20:29.469712  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:20:29.469769  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:20:29.496972  216336 cri.go:89] found id: ""
	I1119 22:20:29.497000  216336 logs.go:282] 0 containers: []
	W1119 22:20:29.497009  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:20:29.497026  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:20:29.497039  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:20:29.585833  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:20:29.585865  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:20:29.600450  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:20:29.600488  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:29.634599  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:20:29.634632  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:20:29.694751  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:20:29.694785  216336 logs.go:123] Gathering logs for kube-apiserver [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7] ...
	I1119 22:20:29.694799  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:29.728982  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:20:29.729009  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:29.762543  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:20:29.762572  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:29.794342  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:20:29.794374  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:29.828582  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:20:29.828610  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:20:29.874642  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:20:29.874672  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
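	[editor's note] The repeated "Gathering logs for ..." steps in this segment locate each component's containers with crictl and then tail their logs. A rough local equivalent in Go is sketched below, using only the crictl flags visible in the log (ps -a --quiet --name, logs --tail 400); minikube actually runs these commands over SSH via ssh_runner.go, which is omitted here:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List only kube-apiserver containers, printing container IDs alone.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
		if err != nil {
			fmt.Println("crictl ps failed:", err)
			return
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("gathering logs for kube-apiserver", id)
			// Tail the last 400 lines of that container's logs.
			logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Println("crictl logs failed:", err)
				continue
			}
			fmt.Println(string(logs))
		}
	}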
	W1119 22:20:31.341227  248121 node_ready.go:57] node "no-preload-638439" has "Ready":"False" status (will retry)
	W1119 22:20:33.840869  248121 node_ready.go:57] node "no-preload-638439" has "Ready":"False" status (will retry)
	I1119 22:20:32.406487  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:20:32.406952  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:20:32.407019  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:20:32.407075  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:20:32.436319  216336 cri.go:89] found id: "7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:32.436348  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:32.436355  216336 cri.go:89] found id: ""
	I1119 22:20:32.436368  216336 logs.go:282] 2 containers: [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:20:32.436424  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:32.440717  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:32.444717  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:20:32.444781  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:20:32.470631  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:32.470655  216336 cri.go:89] found id: ""
	I1119 22:20:32.470666  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:20:32.470725  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:32.474820  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:20:32.474893  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:20:32.504076  216336 cri.go:89] found id: ""
	I1119 22:20:32.504104  216336 logs.go:282] 0 containers: []
	W1119 22:20:32.504115  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:20:32.504125  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:20:32.504185  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:20:32.533110  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:32.533135  216336 cri.go:89] found id: ""
	I1119 22:20:32.533143  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:20:32.533215  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:32.537455  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:20:32.537523  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:20:32.564625  216336 cri.go:89] found id: ""
	I1119 22:20:32.564647  216336 logs.go:282] 0 containers: []
	W1119 22:20:32.564655  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:20:32.564661  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:20:32.564719  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:20:32.591414  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:32.591443  216336 cri.go:89] found id: ""
	I1119 22:20:32.591455  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:20:32.591535  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:32.595459  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:20:32.595529  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:20:32.621765  216336 cri.go:89] found id: ""
	I1119 22:20:32.621792  216336 logs.go:282] 0 containers: []
	W1119 22:20:32.621801  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:20:32.621807  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:20:32.621862  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:20:32.647922  216336 cri.go:89] found id: ""
	I1119 22:20:32.647948  216336 logs.go:282] 0 containers: []
	W1119 22:20:32.647958  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:20:32.647978  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:20:32.648005  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:32.680718  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:20:32.680745  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:20:32.726055  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:20:32.726088  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:20:32.757760  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:20:32.757794  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:20:32.848763  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:20:32.848797  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:20:32.862591  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:20:32.862631  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:20:32.922769  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:20:32.922788  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:20:32.922800  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:32.956142  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:20:32.956171  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:32.991968  216336 logs.go:123] Gathering logs for kube-apiserver [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7] ...
	I1119 22:20:32.992001  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:33.026022  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:20:33.026050  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	W1119 22:20:35.841570  248121 node_ready.go:57] node "no-preload-638439" has "Ready":"False" status (will retry)
	W1119 22:20:38.341654  248121 node_ready.go:57] node "no-preload-638439" has "Ready":"False" status (will retry)
	I1119 22:20:35.560282  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:20:35.560655  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:20:35.560709  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:20:35.560753  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:20:35.585910  216336 cri.go:89] found id: "7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:35.585932  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:35.585936  216336 cri.go:89] found id: ""
	I1119 22:20:35.585943  216336 logs.go:282] 2 containers: [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:20:35.585992  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:35.590055  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:35.593958  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:20:35.594034  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:20:35.620237  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:35.620260  216336 cri.go:89] found id: ""
	I1119 22:20:35.620269  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:20:35.620324  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:35.624840  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:20:35.624917  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:20:35.653998  216336 cri.go:89] found id: ""
	I1119 22:20:35.654026  216336 logs.go:282] 0 containers: []
	W1119 22:20:35.654038  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:20:35.654045  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:20:35.654106  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:20:35.682647  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:35.682672  216336 cri.go:89] found id: ""
	I1119 22:20:35.682681  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:20:35.682742  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:35.687066  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:20:35.687219  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:20:35.714953  216336 cri.go:89] found id: ""
	I1119 22:20:35.714994  216336 logs.go:282] 0 containers: []
	W1119 22:20:35.715005  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:20:35.715012  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:20:35.715067  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:20:35.744549  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:35.744574  216336 cri.go:89] found id: ""
	I1119 22:20:35.744584  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:20:35.744634  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:35.749171  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:20:35.749258  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:20:35.778244  216336 cri.go:89] found id: ""
	I1119 22:20:35.778275  216336 logs.go:282] 0 containers: []
	W1119 22:20:35.778286  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:20:35.778294  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:20:35.778354  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:20:35.806726  216336 cri.go:89] found id: ""
	I1119 22:20:35.806758  216336 logs.go:282] 0 containers: []
	W1119 22:20:35.806769  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:20:35.806787  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:20:35.806800  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:20:35.910924  216336 logs.go:123] Gathering logs for kube-apiserver [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7] ...
	I1119 22:20:35.910954  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:35.944419  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:20:35.944449  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:20:35.990028  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:20:35.990064  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:20:36.023049  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:20:36.023087  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:20:36.037454  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:20:36.037480  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:20:36.099405  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:20:36.099430  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:20:36.099446  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:36.133450  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:20:36.133480  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:36.168344  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:20:36.168370  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:36.204067  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:20:36.204100  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:38.737972  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:20:38.738386  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:20:38.738446  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:20:38.738506  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:20:38.768290  216336 cri.go:89] found id: "7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:38.768313  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:38.768317  216336 cri.go:89] found id: ""
	I1119 22:20:38.768323  216336 logs.go:282] 2 containers: [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:20:38.768368  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:38.772611  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:38.776822  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:20:38.776900  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:20:38.805050  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:38.805075  216336 cri.go:89] found id: ""
	I1119 22:20:38.805085  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:20:38.805135  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:38.809621  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:20:38.809684  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:20:38.838633  216336 cri.go:89] found id: ""
	I1119 22:20:38.838663  216336 logs.go:282] 0 containers: []
	W1119 22:20:38.838674  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:20:38.838682  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:20:38.838738  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:20:38.867615  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:38.867638  216336 cri.go:89] found id: ""
	I1119 22:20:38.867649  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:20:38.867706  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:38.871853  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:20:38.871957  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:20:38.900329  216336 cri.go:89] found id: ""
	I1119 22:20:38.900357  216336 logs.go:282] 0 containers: []
	W1119 22:20:38.900368  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:20:38.900376  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:20:38.900438  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:20:38.928662  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:38.928687  216336 cri.go:89] found id: ""
	I1119 22:20:38.928695  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:20:38.928759  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:38.933179  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:20:38.933261  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:20:38.960907  216336 cri.go:89] found id: ""
	I1119 22:20:38.960938  216336 logs.go:282] 0 containers: []
	W1119 22:20:38.960950  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:20:38.960959  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:20:38.961013  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:20:38.990033  216336 cri.go:89] found id: ""
	I1119 22:20:38.990062  216336 logs.go:282] 0 containers: []
	W1119 22:20:38.990073  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:20:38.990089  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:20:38.990101  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:20:39.034444  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:20:39.034473  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:20:39.128086  216336 logs.go:123] Gathering logs for kube-apiserver [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7] ...
	I1119 22:20:39.128122  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:39.161102  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:20:39.161133  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:39.198561  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:20:39.198589  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:39.232480  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:20:39.232515  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:20:39.264700  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:20:39.264726  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:20:39.280190  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:20:39.280270  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:20:39.344902  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:20:39.344925  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:20:39.344943  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:39.404052  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:20:39.404092  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
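	[editor's note] The cri.go / logs.go lines above repeatedly resolve container IDs by running `sudo crictl ps -a --quiet --name=<component>` on the node. A minimal Go sketch of that lookup pattern follows; it is illustrative only (the runOnNode helper is a hypothetical stand-in for minikube's ssh_runner, and the real cri.go handles more cases).

	// Sketch: find container IDs for a Kubernetes component via crictl,
	// mirroring the `crictl ps -a --quiet --name=...` calls in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// runOnNode is a hypothetical stand-in for executing a command on the node.
	func runOnNode(args ...string) (string, error) {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		return string(out), err
	}

	// listCRIContainers returns all container IDs (any state) whose name matches
	// the given component, e.g. "kube-apiserver".
	func listCRIContainers(name string) ([]string, error) {
		out, err := runOnNode("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name)
		if err != nil {
			return nil, fmt.Errorf("crictl ps failed: %w", err)
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(out), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listCRIContainers("kube-apiserver")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}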
	I1119 22:20:39.841663  248121 node_ready.go:49] node "no-preload-638439" is "Ready"
	I1119 22:20:39.841694  248121 node_ready.go:38] duration metric: took 15.003752614s for node "no-preload-638439" to be "Ready" ...
	I1119 22:20:39.841712  248121 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:20:39.841765  248121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:20:39.860454  248121 api_server.go:72] duration metric: took 15.321071517s to wait for apiserver process to appear ...
	I1119 22:20:39.860483  248121 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:20:39.860504  248121 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 22:20:39.865947  248121 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1119 22:20:39.866902  248121 api_server.go:141] control plane version: v1.34.1
	I1119 22:20:39.866929  248121 api_server.go:131] duration metric: took 6.438901ms to wait for apiserver health ...
	I1119 22:20:39.866939  248121 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:20:39.870369  248121 system_pods.go:59] 8 kube-system pods found
	I1119 22:20:39.870405  248121 system_pods.go:61] "coredns-66bc5c9577-82hpr" [1ec1d37f-c58e-4e2d-aa92-81eb01e0fb9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:20:39.870415  248121 system_pods.go:61] "etcd-no-preload-638439" [eaddff4f-8ba3-4306-afbc-ac79ec3553dc] Running
	I1119 22:20:39.870423  248121 system_pods.go:61] "kindnet-c88rf" [dbc3f590-a300-4682-8d67-eb512c60e790] Running
	I1119 22:20:39.870428  248121 system_pods.go:61] "kube-apiserver-no-preload-638439" [8c579b49-60df-47ca-99df-2d3d5eb9fb65] Running
	I1119 22:20:39.870434  248121 system_pods.go:61] "kube-controller-manager-no-preload-638439" [9e7eb2a9-052a-4ffb-ae46-124b5ba2bdf2] Running
	I1119 22:20:39.870439  248121 system_pods.go:61] "kube-proxy-qvdld" [c17a46d4-7b16-4b78-9678-12006f879013] Running
	I1119 22:20:39.870444  248121 system_pods.go:61] "kube-scheduler-no-preload-638439" [ea8c8f76-d9eb-45a0-ad5b-4c30e02446df] Running
	I1119 22:20:39.870450  248121 system_pods.go:61] "storage-provisioner" [0414837f-ea33-47da-b64c-cdb22e9f1040] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:20:39.870458  248121 system_pods.go:74] duration metric: took 3.511407ms to wait for pod list to return data ...
	I1119 22:20:39.870467  248121 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:20:39.873370  248121 default_sa.go:45] found service account: "default"
	I1119 22:20:39.873394  248121 default_sa.go:55] duration metric: took 2.919537ms for default service account to be created ...
	I1119 22:20:39.873406  248121 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:20:39.880362  248121 system_pods.go:86] 8 kube-system pods found
	I1119 22:20:39.880404  248121 system_pods.go:89] "coredns-66bc5c9577-82hpr" [1ec1d37f-c58e-4e2d-aa92-81eb01e0fb9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:20:39.880413  248121 system_pods.go:89] "etcd-no-preload-638439" [eaddff4f-8ba3-4306-afbc-ac79ec3553dc] Running
	I1119 22:20:39.880422  248121 system_pods.go:89] "kindnet-c88rf" [dbc3f590-a300-4682-8d67-eb512c60e790] Running
	I1119 22:20:39.880429  248121 system_pods.go:89] "kube-apiserver-no-preload-638439" [8c579b49-60df-47ca-99df-2d3d5eb9fb65] Running
	I1119 22:20:39.880436  248121 system_pods.go:89] "kube-controller-manager-no-preload-638439" [9e7eb2a9-052a-4ffb-ae46-124b5ba2bdf2] Running
	I1119 22:20:39.880442  248121 system_pods.go:89] "kube-proxy-qvdld" [c17a46d4-7b16-4b78-9678-12006f879013] Running
	I1119 22:20:39.880449  248121 system_pods.go:89] "kube-scheduler-no-preload-638439" [ea8c8f76-d9eb-45a0-ad5b-4c30e02446df] Running
	I1119 22:20:39.880457  248121 system_pods.go:89] "storage-provisioner" [0414837f-ea33-47da-b64c-cdb22e9f1040] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:20:39.880483  248121 retry.go:31] will retry after 211.503753ms: missing components: kube-dns
	I1119 22:20:40.096185  248121 system_pods.go:86] 8 kube-system pods found
	I1119 22:20:40.096218  248121 system_pods.go:89] "coredns-66bc5c9577-82hpr" [1ec1d37f-c58e-4e2d-aa92-81eb01e0fb9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:20:40.096224  248121 system_pods.go:89] "etcd-no-preload-638439" [eaddff4f-8ba3-4306-afbc-ac79ec3553dc] Running
	I1119 22:20:40.096229  248121 system_pods.go:89] "kindnet-c88rf" [dbc3f590-a300-4682-8d67-eb512c60e790] Running
	I1119 22:20:40.096234  248121 system_pods.go:89] "kube-apiserver-no-preload-638439" [8c579b49-60df-47ca-99df-2d3d5eb9fb65] Running
	I1119 22:20:40.096239  248121 system_pods.go:89] "kube-controller-manager-no-preload-638439" [9e7eb2a9-052a-4ffb-ae46-124b5ba2bdf2] Running
	I1119 22:20:40.096242  248121 system_pods.go:89] "kube-proxy-qvdld" [c17a46d4-7b16-4b78-9678-12006f879013] Running
	I1119 22:20:40.096245  248121 system_pods.go:89] "kube-scheduler-no-preload-638439" [ea8c8f76-d9eb-45a0-ad5b-4c30e02446df] Running
	I1119 22:20:40.096250  248121 system_pods.go:89] "storage-provisioner" [0414837f-ea33-47da-b64c-cdb22e9f1040] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:20:40.096267  248121 retry.go:31] will retry after 236.556511ms: missing components: kube-dns
	I1119 22:20:40.337130  248121 system_pods.go:86] 8 kube-system pods found
	I1119 22:20:40.337164  248121 system_pods.go:89] "coredns-66bc5c9577-82hpr" [1ec1d37f-c58e-4e2d-aa92-81eb01e0fb9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:20:40.337170  248121 system_pods.go:89] "etcd-no-preload-638439" [eaddff4f-8ba3-4306-afbc-ac79ec3553dc] Running
	I1119 22:20:40.337176  248121 system_pods.go:89] "kindnet-c88rf" [dbc3f590-a300-4682-8d67-eb512c60e790] Running
	I1119 22:20:40.337180  248121 system_pods.go:89] "kube-apiserver-no-preload-638439" [8c579b49-60df-47ca-99df-2d3d5eb9fb65] Running
	I1119 22:20:40.337183  248121 system_pods.go:89] "kube-controller-manager-no-preload-638439" [9e7eb2a9-052a-4ffb-ae46-124b5ba2bdf2] Running
	I1119 22:20:40.337186  248121 system_pods.go:89] "kube-proxy-qvdld" [c17a46d4-7b16-4b78-9678-12006f879013] Running
	I1119 22:20:40.337189  248121 system_pods.go:89] "kube-scheduler-no-preload-638439" [ea8c8f76-d9eb-45a0-ad5b-4c30e02446df] Running
	I1119 22:20:40.337194  248121 system_pods.go:89] "storage-provisioner" [0414837f-ea33-47da-b64c-cdb22e9f1040] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:20:40.337210  248121 retry.go:31] will retry after 419.90739ms: missing components: kube-dns
	I1119 22:20:40.761573  248121 system_pods.go:86] 8 kube-system pods found
	I1119 22:20:40.761600  248121 system_pods.go:89] "coredns-66bc5c9577-82hpr" [1ec1d37f-c58e-4e2d-aa92-81eb01e0fb9a] Running
	I1119 22:20:40.761605  248121 system_pods.go:89] "etcd-no-preload-638439" [eaddff4f-8ba3-4306-afbc-ac79ec3553dc] Running
	I1119 22:20:40.761609  248121 system_pods.go:89] "kindnet-c88rf" [dbc3f590-a300-4682-8d67-eb512c60e790] Running
	I1119 22:20:40.761612  248121 system_pods.go:89] "kube-apiserver-no-preload-638439" [8c579b49-60df-47ca-99df-2d3d5eb9fb65] Running
	I1119 22:20:40.761616  248121 system_pods.go:89] "kube-controller-manager-no-preload-638439" [9e7eb2a9-052a-4ffb-ae46-124b5ba2bdf2] Running
	I1119 22:20:40.761619  248121 system_pods.go:89] "kube-proxy-qvdld" [c17a46d4-7b16-4b78-9678-12006f879013] Running
	I1119 22:20:40.761622  248121 system_pods.go:89] "kube-scheduler-no-preload-638439" [ea8c8f76-d9eb-45a0-ad5b-4c30e02446df] Running
	I1119 22:20:40.761625  248121 system_pods.go:89] "storage-provisioner" [0414837f-ea33-47da-b64c-cdb22e9f1040] Running
	I1119 22:20:40.761631  248121 system_pods.go:126] duration metric: took 888.2191ms to wait for k8s-apps to be running ...
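	[editor's note] The "will retry after ...: missing components: kube-dns" lines above show a poll-and-back-off wait for required kube-system pods. A minimal sketch of that wait loop, assuming a placeholder checkPods function in place of the real apiserver query, is shown below; it is not minikube's actual retry.go.

	// Sketch: retry with a growing interval until no required components are
	// missing or the deadline passes, as in the system_pods wait above.
	package main

	import (
		"fmt"
		"time"
	)

	// checkPods reports which required components are still missing.
	// Placeholder: the real code lists kube-system pods via the apiserver.
	func checkPods() (missing []string) {
		return nil // pretend everything is Running
	}

	func waitForComponents(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		interval := 200 * time.Millisecond
		for {
			missing := checkPods()
			if len(missing) == 0 {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out, still missing: %v", missing)
			}
			fmt.Printf("will retry after %v: missing components: %v\n", interval, missing)
			time.Sleep(interval)
			interval *= 2 // back off; the real code caps and jitters the interval
		}
	}

	func main() {
		if err := waitForComponents(2 * time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("all required kube-system components are running")
	}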
	I1119 22:20:40.761639  248121 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:20:40.761680  248121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:20:40.774875  248121 system_svc.go:56] duration metric: took 13.227454ms WaitForService to wait for kubelet
	I1119 22:20:40.774916  248121 kubeadm.go:587] duration metric: took 16.235539934s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:20:40.774957  248121 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:20:40.777534  248121 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 22:20:40.777559  248121 node_conditions.go:123] node cpu capacity is 8
	I1119 22:20:40.777572  248121 node_conditions.go:105] duration metric: took 2.610463ms to run NodePressure ...
	I1119 22:20:40.777583  248121 start.go:242] waiting for startup goroutines ...
	I1119 22:20:40.777590  248121 start.go:247] waiting for cluster config update ...
	I1119 22:20:40.777600  248121 start.go:256] writing updated cluster config ...
	I1119 22:20:40.777839  248121 ssh_runner.go:195] Run: rm -f paused
	I1119 22:20:40.781955  248121 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:20:40.785261  248121 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-82hpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:40.789352  248121 pod_ready.go:94] pod "coredns-66bc5c9577-82hpr" is "Ready"
	I1119 22:20:40.789374  248121 pod_ready.go:86] duration metric: took 4.091067ms for pod "coredns-66bc5c9577-82hpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:40.791288  248121 pod_ready.go:83] waiting for pod "etcd-no-preload-638439" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:40.795303  248121 pod_ready.go:94] pod "etcd-no-preload-638439" is "Ready"
	I1119 22:20:40.795328  248121 pod_ready.go:86] duration metric: took 4.019701ms for pod "etcd-no-preload-638439" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:40.797311  248121 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-638439" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:40.801059  248121 pod_ready.go:94] pod "kube-apiserver-no-preload-638439" is "Ready"
	I1119 22:20:40.801078  248121 pod_ready.go:86] duration metric: took 3.745196ms for pod "kube-apiserver-no-preload-638439" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:40.803100  248121 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-638439" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:41.186534  248121 pod_ready.go:94] pod "kube-controller-manager-no-preload-638439" is "Ready"
	I1119 22:20:41.186564  248121 pod_ready.go:86] duration metric: took 383.443003ms for pod "kube-controller-manager-no-preload-638439" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:41.386658  248121 pod_ready.go:83] waiting for pod "kube-proxy-qvdld" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:41.785853  248121 pod_ready.go:94] pod "kube-proxy-qvdld" is "Ready"
	I1119 22:20:41.785910  248121 pod_ready.go:86] duration metric: took 399.227918ms for pod "kube-proxy-qvdld" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:41.986326  248121 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-638439" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:42.386440  248121 pod_ready.go:94] pod "kube-scheduler-no-preload-638439" is "Ready"
	I1119 22:20:42.386472  248121 pod_ready.go:86] duration metric: took 400.118228ms for pod "kube-scheduler-no-preload-638439" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:20:42.386487  248121 pod_ready.go:40] duration metric: took 1.604506562s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:20:42.433830  248121 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 22:20:42.435791  248121 out.go:179] * Done! kubectl is now configured to use "no-preload-638439" cluster and "default" namespace by default
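	[editor's note] Both processes above poll the apiserver healthz endpoint (the no-preload run gets "returned 200: ok"; the old-k8s-version run keeps hitting "connection refused"). The sketch below shows that polling pattern in Go; it is illustrative only, skips TLS verification for brevity, and is not minikube's api_server.go (which authenticates with the cluster CA and client certificates).

	// Sketch: GET https://<ip>:8443/healthz until it returns 200 or times out.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz "returned 200: ok"
				}
			}
			time.Sleep(500 * time.Millisecond) // connection refused or unhealthy; retry
		}
		return fmt.Errorf("apiserver at %s did not become healthy within %v", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.103.2:8443/healthz", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}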
	I1119 22:20:41.944555  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:20:41.945057  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:20:41.945117  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:20:41.945184  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:20:41.973384  216336 cri.go:89] found id: "7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:41.973404  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:41.973408  216336 cri.go:89] found id: ""
	I1119 22:20:41.973415  216336 logs.go:282] 2 containers: [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:20:41.973461  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:41.977856  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:41.981892  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:20:41.981959  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:20:42.008033  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:42.008051  216336 cri.go:89] found id: ""
	I1119 22:20:42.008058  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:20:42.008104  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:42.012047  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:20:42.012117  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:20:42.039564  216336 cri.go:89] found id: ""
	I1119 22:20:42.039585  216336 logs.go:282] 0 containers: []
	W1119 22:20:42.039592  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:20:42.039598  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:20:42.039640  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:20:42.066025  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:42.066048  216336 cri.go:89] found id: ""
	I1119 22:20:42.066055  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:20:42.066100  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:42.070272  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:20:42.070339  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:20:42.098010  216336 cri.go:89] found id: ""
	I1119 22:20:42.098040  216336 logs.go:282] 0 containers: []
	W1119 22:20:42.098051  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:20:42.098059  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:20:42.098113  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:20:42.123869  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:42.123906  216336 cri.go:89] found id: ""
	I1119 22:20:42.123917  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:20:42.124025  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:42.128062  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:20:42.128130  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:20:42.153671  216336 cri.go:89] found id: ""
	I1119 22:20:42.153698  216336 logs.go:282] 0 containers: []
	W1119 22:20:42.153709  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:20:42.153716  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:20:42.153780  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:20:42.180132  216336 cri.go:89] found id: ""
	I1119 22:20:42.180161  216336 logs.go:282] 0 containers: []
	W1119 22:20:42.180170  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:20:42.180182  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:20:42.180196  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:20:42.194332  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:20:42.194358  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:20:42.251447  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:20:42.251468  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:20:42.251483  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:20:42.294627  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:20:42.294656  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:20:42.392676  216336 logs.go:123] Gathering logs for kube-apiserver [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7] ...
	I1119 22:20:42.392717  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:42.430196  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:20:42.430237  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:42.467848  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:20:42.467916  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:42.503366  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:20:42.503406  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:42.541249  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:20:42.541286  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:42.576527  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:20:42.576567  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:20:45.116162  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:20:45.116676  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:20:45.116726  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:20:45.116771  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:20:45.143391  216336 cri.go:89] found id: "7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:45.143413  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:45.143417  216336 cri.go:89] found id: ""
	I1119 22:20:45.143424  216336 logs.go:282] 2 containers: [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:20:45.143481  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:45.147496  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:45.151645  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:20:45.151715  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:20:45.179010  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:45.179040  216336 cri.go:89] found id: ""
	I1119 22:20:45.179050  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:20:45.179103  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:45.183043  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:20:45.183113  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:20:45.209157  216336 cri.go:89] found id: ""
	I1119 22:20:45.209188  216336 logs.go:282] 0 containers: []
	W1119 22:20:45.209197  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:20:45.209204  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:20:45.209267  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:20:45.235248  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:45.235273  216336 cri.go:89] found id: ""
	I1119 22:20:45.235280  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:20:45.235327  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:45.239289  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:20:45.239354  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:20:45.266111  216336 cri.go:89] found id: ""
	I1119 22:20:45.266138  216336 logs.go:282] 0 containers: []
	W1119 22:20:45.266148  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:20:45.266156  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:20:45.266215  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:20:45.292378  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:45.292402  216336 cri.go:89] found id: ""
	I1119 22:20:45.292412  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:20:45.292468  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:45.296533  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:20:45.296597  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:20:45.321930  216336 cri.go:89] found id: ""
	I1119 22:20:45.321956  216336 logs.go:282] 0 containers: []
	W1119 22:20:45.321964  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:20:45.321970  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:20:45.322016  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:20:45.349056  216336 cri.go:89] found id: ""
	I1119 22:20:45.349077  216336 logs.go:282] 0 containers: []
	W1119 22:20:45.349086  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:20:45.349100  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:20:45.349110  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:20:45.362497  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:20:45.362528  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:45.394351  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:20:45.394381  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:45.430285  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:20:45.430314  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:20:45.463818  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:20:45.463841  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:20:45.554734  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:20:45.554767  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:20:45.613286  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:20:45.613305  216336 logs.go:123] Gathering logs for kube-apiserver [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7] ...
	I1119 22:20:45.613319  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:45.646165  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:20:45.646195  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:45.678849  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:20:45.678877  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:45.711833  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:20:45.711864  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:20:48.253137  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:20:48.253591  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:20:48.253653  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:20:48.253715  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:20:48.280996  216336 cri.go:89] found id: "7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:48.281030  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:48.281035  216336 cri.go:89] found id: ""
	I1119 22:20:48.281043  216336 logs.go:282] 2 containers: [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:20:48.281106  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:48.285230  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:48.289363  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:20:48.289436  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:20:48.316988  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:48.317017  216336 cri.go:89] found id: ""
	I1119 22:20:48.317025  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:20:48.317074  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:48.321244  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:20:48.321309  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:20:48.347638  216336 cri.go:89] found id: ""
	I1119 22:20:48.347660  216336 logs.go:282] 0 containers: []
	W1119 22:20:48.347670  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:20:48.347677  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:20:48.347733  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:20:48.376138  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:48.376159  216336 cri.go:89] found id: ""
	I1119 22:20:48.376167  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:20:48.376224  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:48.380497  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:20:48.380570  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:20:48.407988  216336 cri.go:89] found id: ""
	I1119 22:20:48.408015  216336 logs.go:282] 0 containers: []
	W1119 22:20:48.408026  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:20:48.408035  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:20:48.408088  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:20:48.435356  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:48.435377  216336 cri.go:89] found id: ""
	I1119 22:20:48.435385  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:20:48.435432  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:20:48.439573  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:20:48.439635  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:20:48.467054  216336 cri.go:89] found id: ""
	I1119 22:20:48.467076  216336 logs.go:282] 0 containers: []
	W1119 22:20:48.467084  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:20:48.467089  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:20:48.467135  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:20:48.496591  216336 cri.go:89] found id: ""
	I1119 22:20:48.496614  216336 logs.go:282] 0 containers: []
	W1119 22:20:48.496621  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:20:48.496636  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:20:48.496648  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:20:48.556573  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:20:48.556590  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:20:48.556602  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:20:48.592251  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:20:48.592283  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:20:48.633384  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:20:48.633420  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:20:48.724156  216336 logs.go:123] Gathering logs for kube-apiserver [7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7] ...
	I1119 22:20:48.724196  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ad4b982b6755076027d5e5a0dbbc765f8f8a005fc34051b36948b140a060ce7"
	I1119 22:20:48.757346  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:20:48.757377  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:20:48.791208  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:20:48.791245  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:20:48.826852  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:20:48.826898  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:20:48.860710  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:20:48.860741  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:20:48.890702  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:20:48.890730  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
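	[editor's note] Each log-gathering pass above tails systemd units with `journalctl -u <unit> -n 400` and individual containers with `crictl logs --tail 400 <id>`. A minimal sketch of that collection loop follows; runOnNode is again a hypothetical stand-in for the ssh_runner, and this is not minikube's actual logs.go.

	// Sketch: collect recent logs per systemd unit and per container ID.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func runOnNode(command string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
		return string(out), err
	}

	func gatherLogs(units, containerIDs []string) map[string]string {
		logs := make(map[string]string)
		for _, u := range units {
			out, _ := runOnNode(fmt.Sprintf("sudo journalctl -u %s -n 400", u))
			logs["unit:"+u] = out
		}
		for _, id := range containerIDs {
			out, _ := runOnNode(fmt.Sprintf("sudo crictl logs --tail 400 %s", id))
			logs["container:"+id] = out
		}
		return logs
	}

	func main() {
		logs := gatherLogs([]string{"kubelet", "containerd"}, nil)
		for name, out := range logs {
			fmt.Printf("== %s == (%d bytes)\n", name, len(out))
		}
	}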
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9d98e3b1e5a67       56cc512116c8f       6 seconds ago       Running             busybox                   0                   531292f348e86       busybox                                     default
	2171565edf3d7       52546a367cc9e       11 seconds ago      Running             coredns                   0                   1c79324037c76       coredns-66bc5c9577-82hpr                    kube-system
	d8860258a82e4       6e38f40d628db       11 seconds ago      Running             storage-provisioner       0                   de2fc5aa05444       storage-provisioner                         kube-system
	683ab4246e75d       409467f978b4a       22 seconds ago      Running             kindnet-cni               0                   6df9bfcc4a890       kindnet-c88rf                               kube-system
	dcbd92b705b37       fc25172553d79       26 seconds ago      Running             kube-proxy                0                   5644699eb9d2b       kube-proxy-qvdld                            kube-system
	7e1261c5393eb       c80c8dbafe7dd       36 seconds ago      Running             kube-controller-manager   0                   4aa24c3fbb529       kube-controller-manager-no-preload-638439   kube-system
	e6425f304360e       c3994bc696102       36 seconds ago      Running             kube-apiserver            0                   082207d8d21e3       kube-apiserver-no-preload-638439            kube-system
	405eabebdf22d       5f1f5298c888d       36 seconds ago      Running             etcd                      0                   4d64cba50742f       etcd-no-preload-638439                      kube-system
	159fa612c2cd4       7dd6aaa1717ab       36 seconds ago      Running             kube-scheduler            0                   d1406c1225272       kube-scheduler-no-preload-638439            kube-system
	
	
	==> containerd <==
	Nov 19 22:20:39 no-preload-638439 containerd[666]: time="2025-11-19T22:20:39.835103665Z" level=info msg="Container 2171565edf3d7efdf300f7e333349f8ff69e3b29100fb2b17a3b661f00c5ec21: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:20:39 no-preload-638439 containerd[666]: time="2025-11-19T22:20:39.838601245Z" level=info msg="CreateContainer within sandbox \"de2fc5aa054444d86e05db7e22e6469af009c84bcdb2d87493ee537ef2db2b1b\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"d8860258a82e4c779916142960fa4bafe11aac24b228816c847879aa27146f91\""
	Nov 19 22:20:39 no-preload-638439 containerd[666]: time="2025-11-19T22:20:39.839302183Z" level=info msg="StartContainer for \"d8860258a82e4c779916142960fa4bafe11aac24b228816c847879aa27146f91\""
	Nov 19 22:20:39 no-preload-638439 containerd[666]: time="2025-11-19T22:20:39.840661082Z" level=info msg="connecting to shim d8860258a82e4c779916142960fa4bafe11aac24b228816c847879aa27146f91" address="unix:///run/containerd/s/61149daaf323c0f190c059eb6e5d17f4a89bffe3f10cc9153695033979afdd69" protocol=ttrpc version=3
	Nov 19 22:20:39 no-preload-638439 containerd[666]: time="2025-11-19T22:20:39.844167970Z" level=info msg="CreateContainer within sandbox \"1c79324037c7646a24733a1633d317cad34f987fd4e2a2427f09d8b1b665386f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2171565edf3d7efdf300f7e333349f8ff69e3b29100fb2b17a3b661f00c5ec21\""
	Nov 19 22:20:39 no-preload-638439 containerd[666]: time="2025-11-19T22:20:39.844841182Z" level=info msg="StartContainer for \"2171565edf3d7efdf300f7e333349f8ff69e3b29100fb2b17a3b661f00c5ec21\""
	Nov 19 22:20:39 no-preload-638439 containerd[666]: time="2025-11-19T22:20:39.846176402Z" level=info msg="connecting to shim 2171565edf3d7efdf300f7e333349f8ff69e3b29100fb2b17a3b661f00c5ec21" address="unix:///run/containerd/s/be9a778b46ba0f762e0ea6f46004071a696b849b492428ac00466289f21516e4" protocol=ttrpc version=3
	Nov 19 22:20:39 no-preload-638439 containerd[666]: time="2025-11-19T22:20:39.912268545Z" level=info msg="StartContainer for \"d8860258a82e4c779916142960fa4bafe11aac24b228816c847879aa27146f91\" returns successfully"
	Nov 19 22:20:39 no-preload-638439 containerd[666]: time="2025-11-19T22:20:39.925510939Z" level=info msg="StartContainer for \"2171565edf3d7efdf300f7e333349f8ff69e3b29100fb2b17a3b661f00c5ec21\" returns successfully"
	Nov 19 22:20:42 no-preload-638439 containerd[666]: time="2025-11-19T22:20:42.920730428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:7de716fc-5cc0-401e-af15-e754abb3f8ee,Namespace:default,Attempt:0,}"
	Nov 19 22:20:42 no-preload-638439 containerd[666]: time="2025-11-19T22:20:42.965127294Z" level=info msg="connecting to shim 531292f348e868c8f9bc938787188a39dcbc6ffa0a8446d934e613dacfc716f9" address="unix:///run/containerd/s/a24e3303b353d012017377837607fe3c4f29d44aab3a08f5b3c6733f210993ab" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 22:20:43 no-preload-638439 containerd[666]: time="2025-11-19T22:20:43.033780144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:7de716fc-5cc0-401e-af15-e754abb3f8ee,Namespace:default,Attempt:0,} returns sandbox id \"531292f348e868c8f9bc938787188a39dcbc6ffa0a8446d934e613dacfc716f9\""
	Nov 19 22:20:43 no-preload-638439 containerd[666]: time="2025-11-19T22:20:43.035432060Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.015636690Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.016541511Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396642"
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.017641226Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.019742257Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.020278554Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 1.984803694s"
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.020311259Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.024123911Z" level=info msg="CreateContainer within sandbox \"531292f348e868c8f9bc938787188a39dcbc6ffa0a8446d934e613dacfc716f9\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.032566245Z" level=info msg="Container 9d98e3b1e5a67590577452ae69a37a3a2460fa0beda90eb8371cb888e44e5577: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.038725144Z" level=info msg="CreateContainer within sandbox \"531292f348e868c8f9bc938787188a39dcbc6ffa0a8446d934e613dacfc716f9\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"9d98e3b1e5a67590577452ae69a37a3a2460fa0beda90eb8371cb888e44e5577\""
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.039383059Z" level=info msg="StartContainer for \"9d98e3b1e5a67590577452ae69a37a3a2460fa0beda90eb8371cb888e44e5577\""
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.040390655Z" level=info msg="connecting to shim 9d98e3b1e5a67590577452ae69a37a3a2460fa0beda90eb8371cb888e44e5577" address="unix:///run/containerd/s/a24e3303b353d012017377837607fe3c4f29d44aab3a08f5b3c6733f210993ab" protocol=ttrpc version=3
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.088908611Z" level=info msg="StartContainer for \"9d98e3b1e5a67590577452ae69a37a3a2460fa0beda90eb8371cb888e44e5577\" returns successfully"
	
	
	==> coredns [2171565edf3d7efdf300f7e333349f8ff69e3b29100fb2b17a3b661f00c5ec21] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34304 - 63937 "HINFO IN 145528421484830345.5922166076607501534. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.068700541s
	
	
	==> describe nodes <==
	Name:               no-preload-638439
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-638439
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=no-preload-638439
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_20_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:20:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-638439
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:20:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:20:50 +0000   Wed, 19 Nov 2025 22:20:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:20:50 +0000   Wed, 19 Nov 2025 22:20:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:20:50 +0000   Wed, 19 Nov 2025 22:20:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:20:50 +0000   Wed, 19 Nov 2025 22:20:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-638439
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                cb9234f4-7a8c-4f18-a926-993410815873
	  Boot ID:                    f21fb8e8-9754-4dc5-a8d9-ce41ba5f6057
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-82hpr                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-no-preload-638439                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-c88rf                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-638439             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-638439    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-qvdld                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-638439             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  32s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node no-preload-638439 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node no-preload-638439 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node no-preload-638439 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node no-preload-638439 event: Registered Node no-preload-638439 in Controller
	  Normal  NodeReady                12s   kubelet          Node no-preload-638439 status is now: NodeReady
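	[editor's note] The capacity and pressure conditions shown in the node description above are what the node_conditions.go checks earlier in the log verify ("node cpu capacity is 8", MemoryPressure/DiskPressure/PIDPressure all False). A minimal client-go sketch of reading the same fields is given below; the kubeconfig path and node name are assumptions for illustration, and this is not minikube's actual implementation.

	// Sketch: fetch a node and print its capacity and pressure conditions.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-638439", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node cpu capacity is %s, ephemeral storage is %s\n", cpu.String(), storage.String())
		for _, c := range node.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure, corev1.NodeReady:
				fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason)
			}
		}
	}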
	
	
	==> dmesg <==
	[Nov19 21:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001836] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.089012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.424964] i8042: Warning: Keylock active
	[  +0.011946] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499038] block sda: the capability attribute has been deprecated.
	[  +0.090446] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026259] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.862736] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [405eabebdf22dd831aa00ab4e3ee15e53537277965c0d15fd4a3ac187f178b0b] <==
	{"level":"warn","ts":"2025-11-19T22:20:16.263043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.271195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.278460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.287760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.295063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.302506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.310139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.319907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.327213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.334532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.341234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.347595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.355083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.362330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.369590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.376775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.382876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.390055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.397299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.405353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.413417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.431212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.437805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.443997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.487667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34546","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:20:52 up  1:03,  0 user,  load average: 4.21, 3.38, 2.13
	Linux no-preload-638439 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [683ab4246e75dc17f6d6bbce97bc19c4413c8de5876941ef071541e73fb083f6] <==
	I1119 22:20:29.118862       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:20:29.119179       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1119 22:20:29.119388       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:20:29.119414       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:20:29.119445       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:20:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:20:29.321409       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:20:29.321663       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:20:29.321681       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:20:29.321861       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:20:29.682098       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:20:29.682124       1 metrics.go:72] Registering metrics
	I1119 22:20:29.682165       1 controller.go:711] "Syncing nftables rules"
	I1119 22:20:39.325078       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 22:20:39.325133       1 main.go:301] handling current node
	I1119 22:20:49.322243       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 22:20:49.322276       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e6425f304360e6c945a35502eef794042bec98774dea1b696a51a81f0238d5c0] <==
	I1119 22:20:16.974676       1 aggregator.go:171] initial CRD sync complete...
	I1119 22:20:16.974695       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 22:20:16.974703       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:20:16.974710       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:20:16.975225       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 22:20:17.158181       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:20:17.870006       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:20:17.873802       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:20:17.873822       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:20:18.342148       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:20:18.379938       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:20:18.473581       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:20:18.479348       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1119 22:20:18.480586       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:20:18.485486       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:20:18.886304       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:20:19.525252       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:20:19.535987       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:20:19.543383       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:20:24.641354       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:20:24.689145       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 22:20:24.689146       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 22:20:24.989418       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:20:24.993037       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1119 22:20:50.717785       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:53648: use of closed network connection
	
	
	==> kube-controller-manager [7e1261c5393eb9b047aef79cc833db37d8b348e2e6fba9c14452088cfc66fcdb] <==
	I1119 22:20:23.885928       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:20:23.885940       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:20:23.885948       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:20:23.886049       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-638439"
	I1119 22:20:23.886087       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:20:23.886141       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 22:20:23.886209       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 22:20:23.886246       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 22:20:23.886387       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:20:23.886454       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 22:20:23.886661       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 22:20:23.887071       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 22:20:23.887158       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 22:20:23.887243       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:20:23.888246       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 22:20:23.888350       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 22:20:23.889415       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 22:20:23.890812       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 22:20:23.890806       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:20:23.892005       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:20:23.897278       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 22:20:23.903463       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 22:20:23.908770       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 22:20:23.912139       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:20:43.888011       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [dcbd92b705b37e35dd2979e89cb6160c3c85860c5fdec20b514148132a315d78] <==
	I1119 22:20:25.884756       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:20:25.942685       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:20:26.043521       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:20:26.043565       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1119 22:20:26.043676       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:20:26.067145       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:20:26.067284       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:20:26.074679       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:20:26.075185       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:20:26.075236       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:20:26.077549       1 config.go:309] "Starting node config controller"
	I1119 22:20:26.077573       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:20:26.078974       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:20:26.079007       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:20:26.079196       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:20:26.079205       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:20:26.079202       1 config.go:200] "Starting service config controller"
	I1119 22:20:26.079241       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:20:26.178275       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:20:26.179483       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 22:20:26.179507       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:20:26.179518       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [159fa612c2cd44cba268f356b3fc242510cdd5755545e3e7616335a46b35eb21] <==
	E1119 22:20:16.916927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:20:16.917081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:20:16.916976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:20:16.917108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:20:16.917101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:20:16.917298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:20:16.917436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:20:16.917436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:20:16.917533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:20:16.917629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:20:16.917669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:20:16.917732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:20:16.917579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:20:16.918257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:20:17.725134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:20:17.775472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:20:17.775472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:20:17.807221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:20:17.827803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:20:17.861330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:20:18.070332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:20:18.081282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:20:18.107459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:20:18.134818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1119 22:20:18.512856       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:20:23 no-preload-638439 kubelet[2185]: I1119 22:20:23.902623    2185 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: I1119 22:20:24.783710    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c17a46d4-7b16-4b78-9678-12006f879013-xtables-lock\") pod \"kube-proxy-qvdld\" (UID: \"c17a46d4-7b16-4b78-9678-12006f879013\") " pod="kube-system/kube-proxy-qvdld"
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: I1119 22:20:24.783752    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbc3f590-a300-4682-8d67-eb512c60e790-xtables-lock\") pod \"kindnet-c88rf\" (UID: \"dbc3f590-a300-4682-8d67-eb512c60e790\") " pod="kube-system/kindnet-c88rf"
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: I1119 22:20:24.783774    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c17a46d4-7b16-4b78-9678-12006f879013-lib-modules\") pod \"kube-proxy-qvdld\" (UID: \"c17a46d4-7b16-4b78-9678-12006f879013\") " pod="kube-system/kube-proxy-qvdld"
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: I1119 22:20:24.783788    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dbc3f590-a300-4682-8d67-eb512c60e790-cni-cfg\") pod \"kindnet-c88rf\" (UID: \"dbc3f590-a300-4682-8d67-eb512c60e790\") " pod="kube-system/kindnet-c88rf"
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: I1119 22:20:24.783803    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x27hz\" (UniqueName: \"kubernetes.io/projected/dbc3f590-a300-4682-8d67-eb512c60e790-kube-api-access-x27hz\") pod \"kindnet-c88rf\" (UID: \"dbc3f590-a300-4682-8d67-eb512c60e790\") " pod="kube-system/kindnet-c88rf"
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: I1119 22:20:24.783828    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c17a46d4-7b16-4b78-9678-12006f879013-kube-proxy\") pod \"kube-proxy-qvdld\" (UID: \"c17a46d4-7b16-4b78-9678-12006f879013\") " pod="kube-system/kube-proxy-qvdld"
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: I1119 22:20:24.783981    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hmcs\" (UniqueName: \"kubernetes.io/projected/c17a46d4-7b16-4b78-9678-12006f879013-kube-api-access-2hmcs\") pod \"kube-proxy-qvdld\" (UID: \"c17a46d4-7b16-4b78-9678-12006f879013\") " pod="kube-system/kube-proxy-qvdld"
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: I1119 22:20:24.784059    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbc3f590-a300-4682-8d67-eb512c60e790-lib-modules\") pod \"kindnet-c88rf\" (UID: \"dbc3f590-a300-4682-8d67-eb512c60e790\") " pod="kube-system/kindnet-c88rf"
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: E1119 22:20:24.891494    2185 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: E1119 22:20:24.891535    2185 projected.go:196] Error preparing data for projected volume kube-api-access-2hmcs for pod kube-system/kube-proxy-qvdld: configmap "kube-root-ca.crt" not found
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: E1119 22:20:24.891626    2185 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c17a46d4-7b16-4b78-9678-12006f879013-kube-api-access-2hmcs podName:c17a46d4-7b16-4b78-9678-12006f879013 nodeName:}" failed. No retries permitted until 2025-11-19 22:20:25.391595822 +0000 UTC m=+6.117533845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2hmcs" (UniqueName: "kubernetes.io/projected/c17a46d4-7b16-4b78-9678-12006f879013-kube-api-access-2hmcs") pod "kube-proxy-qvdld" (UID: "c17a46d4-7b16-4b78-9678-12006f879013") : configmap "kube-root-ca.crt" not found
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: E1119 22:20:24.893717    2185 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: E1119 22:20:24.893744    2185 projected.go:196] Error preparing data for projected volume kube-api-access-x27hz for pod kube-system/kindnet-c88rf: configmap "kube-root-ca.crt" not found
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: E1119 22:20:24.893815    2185 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dbc3f590-a300-4682-8d67-eb512c60e790-kube-api-access-x27hz podName:dbc3f590-a300-4682-8d67-eb512c60e790 nodeName:}" failed. No retries permitted until 2025-11-19 22:20:25.393782921 +0000 UTC m=+6.119720943 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x27hz" (UniqueName: "kubernetes.io/projected/dbc3f590-a300-4682-8d67-eb512c60e790-kube-api-access-x27hz") pod "kindnet-c88rf" (UID: "dbc3f590-a300-4682-8d67-eb512c60e790") : configmap "kube-root-ca.crt" not found
	Nov 19 22:20:26 no-preload-638439 kubelet[2185]: I1119 22:20:26.409477    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qvdld" podStartSLOduration=2.409454745 podStartE2EDuration="2.409454745s" podCreationTimestamp="2025-11-19 22:20:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:20:26.409252865 +0000 UTC m=+7.135190902" watchObservedRunningTime="2025-11-19 22:20:26.409454745 +0000 UTC m=+7.135392788"
	Nov 19 22:20:29 no-preload-638439 kubelet[2185]: I1119 22:20:29.471871    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-c88rf" podStartSLOduration=2.605197213 podStartE2EDuration="5.471851095s" podCreationTimestamp="2025-11-19 22:20:24 +0000 UTC" firstStartedPulling="2025-11-19 22:20:25.936462764 +0000 UTC m=+6.662400777" lastFinishedPulling="2025-11-19 22:20:28.803116645 +0000 UTC m=+9.529054659" observedRunningTime="2025-11-19 22:20:29.409950648 +0000 UTC m=+10.135888679" watchObservedRunningTime="2025-11-19 22:20:29.471851095 +0000 UTC m=+10.197789127"
	Nov 19 22:20:39 no-preload-638439 kubelet[2185]: I1119 22:20:39.347033    2185 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 22:20:39 no-preload-638439 kubelet[2185]: I1119 22:20:39.488416    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ec1d37f-c58e-4e2d-aa92-81eb01e0fb9a-config-volume\") pod \"coredns-66bc5c9577-82hpr\" (UID: \"1ec1d37f-c58e-4e2d-aa92-81eb01e0fb9a\") " pod="kube-system/coredns-66bc5c9577-82hpr"
	Nov 19 22:20:39 no-preload-638439 kubelet[2185]: I1119 22:20:39.488709    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm88r\" (UniqueName: \"kubernetes.io/projected/1ec1d37f-c58e-4e2d-aa92-81eb01e0fb9a-kube-api-access-nm88r\") pod \"coredns-66bc5c9577-82hpr\" (UID: \"1ec1d37f-c58e-4e2d-aa92-81eb01e0fb9a\") " pod="kube-system/coredns-66bc5c9577-82hpr"
	Nov 19 22:20:39 no-preload-638439 kubelet[2185]: I1119 22:20:39.488789    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0414837f-ea33-47da-b64c-cdb22e9f1040-tmp\") pod \"storage-provisioner\" (UID: \"0414837f-ea33-47da-b64c-cdb22e9f1040\") " pod="kube-system/storage-provisioner"
	Nov 19 22:20:39 no-preload-638439 kubelet[2185]: I1119 22:20:39.488810    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgxkh\" (UniqueName: \"kubernetes.io/projected/0414837f-ea33-47da-b64c-cdb22e9f1040-kube-api-access-lgxkh\") pod \"storage-provisioner\" (UID: \"0414837f-ea33-47da-b64c-cdb22e9f1040\") " pod="kube-system/storage-provisioner"
	Nov 19 22:20:40 no-preload-638439 kubelet[2185]: I1119 22:20:40.436087    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-82hpr" podStartSLOduration=15.436064241 podStartE2EDuration="15.436064241s" podCreationTimestamp="2025-11-19 22:20:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:20:40.435825083 +0000 UTC m=+21.161763113" watchObservedRunningTime="2025-11-19 22:20:40.436064241 +0000 UTC m=+21.162002273"
	Nov 19 22:20:40 no-preload-638439 kubelet[2185]: I1119 22:20:40.456073    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.456052647 podStartE2EDuration="15.456052647s" podCreationTimestamp="2025-11-19 22:20:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:20:40.446030606 +0000 UTC m=+21.171968636" watchObservedRunningTime="2025-11-19 22:20:40.456052647 +0000 UTC m=+21.181990679"
	Nov 19 22:20:42 no-preload-638439 kubelet[2185]: I1119 22:20:42.707092    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsksb\" (UniqueName: \"kubernetes.io/projected/7de716fc-5cc0-401e-af15-e754abb3f8ee-kube-api-access-xsksb\") pod \"busybox\" (UID: \"7de716fc-5cc0-401e-af15-e754abb3f8ee\") " pod="default/busybox"
	
	
	==> storage-provisioner [d8860258a82e4c779916142960fa4bafe11aac24b228816c847879aa27146f91] <==
	I1119 22:20:39.918121       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:20:39.928301       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:20:39.928385       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:20:39.931595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:39.939073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:20:39.939256       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:20:39.939410       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ea0f5240-138c-44d8-830f-af4064436d86", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-638439_b327f305-43e2-481a-83b9-0eb1dba2136f became leader
	I1119 22:20:39.939439       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-638439_b327f305-43e2-481a-83b9-0eb1dba2136f!
	W1119 22:20:39.943065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:39.951033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:20:40.039590       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-638439_b327f305-43e2-481a-83b9-0eb1dba2136f!
	W1119 22:20:41.956172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:41.961423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:43.964285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:43.969049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:45.971705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:45.976626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:47.979993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:47.984149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:49.987200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:49.992727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:51.997100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:52.001077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-638439 -n no-preload-638439
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-638439 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-638439
helpers_test.go:243: (dbg) docker inspect no-preload-638439:

-- stdout --
	[
	    {
	        "Id": "4ff4bb9a387d8fad035c97ac1b287af406bd01ea1bd851631d39a79ee3cf699f",
	        "Created": "2025-11-19T22:19:50.066386297Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 249040,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:19:50.106148209Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/4ff4bb9a387d8fad035c97ac1b287af406bd01ea1bd851631d39a79ee3cf699f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4ff4bb9a387d8fad035c97ac1b287af406bd01ea1bd851631d39a79ee3cf699f/hostname",
	        "HostsPath": "/var/lib/docker/containers/4ff4bb9a387d8fad035c97ac1b287af406bd01ea1bd851631d39a79ee3cf699f/hosts",
	        "LogPath": "/var/lib/docker/containers/4ff4bb9a387d8fad035c97ac1b287af406bd01ea1bd851631d39a79ee3cf699f/4ff4bb9a387d8fad035c97ac1b287af406bd01ea1bd851631d39a79ee3cf699f-json.log",
	        "Name": "/no-preload-638439",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-638439:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-638439",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4ff4bb9a387d8fad035c97ac1b287af406bd01ea1bd851631d39a79ee3cf699f",
	                "LowerDir": "/var/lib/docker/overlay2/abfe6f4b627d53602a4852aa11b97ff39ca3345dd9cdd11aaaa601dd42361499-init/diff:/var/lib/docker/overlay2/b09480e350abbb2f4f48b19448dc8e9ddd0de679fdb8cd59ebc5b758a29b344e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/abfe6f4b627d53602a4852aa11b97ff39ca3345dd9cdd11aaaa601dd42361499/merged",
	                "UpperDir": "/var/lib/docker/overlay2/abfe6f4b627d53602a4852aa11b97ff39ca3345dd9cdd11aaaa601dd42361499/diff",
	                "WorkDir": "/var/lib/docker/overlay2/abfe6f4b627d53602a4852aa11b97ff39ca3345dd9cdd11aaaa601dd42361499/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-638439",
	                "Source": "/var/lib/docker/volumes/no-preload-638439/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-638439",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-638439",
	                "name.minikube.sigs.k8s.io": "no-preload-638439",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6468eaa6e58815ec1a8ce4eef75bd1d1183671d7d0f0969ca0d0d7197bcd337c",
	            "SandboxKey": "/var/run/docker/netns/6468eaa6e588",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-638439": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "704e557aea6b4b1b4f015ac501c682b6edea96a04c5ccb3e1b740fcfc4233bcd",
	                    "EndpointID": "725c5e37d39d816ed8bc0698b36833d09a0bbafb80fc6b01e76122045fed421c",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "16:8a:6e:8d:e0:e9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-638439",
	                        "4ff4bb9a387d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-638439 -n no-preload-638439
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-638439 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ ssh     │ -p cilium-904997 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo containerd config dump                                                                                                                                                                                                        │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ ssh     │ -p cilium-904997 sudo crio config                                                                                                                                                                                                                   │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │                     │
	│ delete  │ -p cilium-904997                                                                                                                                                                                                                                    │ cilium-904997             │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │ 19 Nov 25 22:18 UTC │
	│ start   │ -p force-systemd-flag-635885 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-635885 │ jenkins │ v1.37.0 │ 19 Nov 25 22:18 UTC │ 19 Nov 25 22:19 UTC │
	│ ssh     │ -p NoKubernetes-836292 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │                     │
	│ ssh     │ force-systemd-flag-635885 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-635885 │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ delete  │ -p force-systemd-flag-635885                                                                                                                                                                                                                        │ force-systemd-flag-635885 │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ stop    │ -p NoKubernetes-836292                                                                                                                                                                                                                              │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ start   │ -p NoKubernetes-836292 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ start   │ -p cert-options-071115 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-071115       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ ssh     │ -p NoKubernetes-836292 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │                     │
	│ delete  │ -p NoKubernetes-836292                                                                                                                                                                                                                              │ NoKubernetes-836292       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ start   │ -p old-k8s-version-975700 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-975700    │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:20 UTC │
	│ ssh     │ cert-options-071115 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-071115       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ ssh     │ -p cert-options-071115 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-071115       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ delete  │ -p cert-options-071115                                                                                                                                                                                                                              │ cert-options-071115       │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:19 UTC │
	│ start   │ -p no-preload-638439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-638439         │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:20 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-975700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-975700    │ jenkins │ v1.37.0 │ 19 Nov 25 22:20 UTC │ 19 Nov 25 22:20 UTC │
	│ stop    │ -p old-k8s-version-975700 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-975700    │ jenkins │ v1.37.0 │ 19 Nov 25 22:20 UTC │ 19 Nov 25 22:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-975700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-975700    │ jenkins │ v1.37.0 │ 19 Nov 25 22:20 UTC │ 19 Nov 25 22:20 UTC │
	│ start   │ -p old-k8s-version-975700 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-975700    │ jenkins │ v1.37.0 │ 19 Nov 25 22:20 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:20:52
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:20:52.038114  259058 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:20:52.038465  259058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:20:52.038478  259058 out.go:374] Setting ErrFile to fd 2...
	I1119 22:20:52.038483  259058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:20:52.038697  259058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
	I1119 22:20:52.039156  259058 out.go:368] Setting JSON to false
	I1119 22:20:52.040431  259058 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3792,"bootTime":1763587060,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:20:52.040555  259058 start.go:143] virtualization: kvm guest
	I1119 22:20:52.042530  259058 out.go:179] * [old-k8s-version-975700] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:20:52.044001  259058 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:20:52.044012  259058 notify.go:221] Checking for updates...
	I1119 22:20:52.045158  259058 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:20:52.046915  259058 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:20:52.048364  259058 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9296/.minikube
	I1119 22:20:52.049794  259058 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:20:52.052096  259058 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:20:52.055625  259058 config.go:182] Loaded profile config "old-k8s-version-975700": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 22:20:52.057813  259058 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1119 22:20:52.059125  259058 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:20:52.090429  259058 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:20:52.090528  259058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:20:52.158279  259058 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 22:20:52.145826914 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:20:52.158449  259058 docker.go:319] overlay module found
	I1119 22:20:52.160727  259058 out.go:179] * Using the docker driver based on existing profile
	I1119 22:20:52.162017  259058 start.go:309] selected driver: docker
	I1119 22:20:52.162035  259058 start.go:930] validating driver "docker" against &{Name:old-k8s-version-975700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-975700 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountStr
ing: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:20:52.162146  259058 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:20:52.162714  259058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:20:52.228681  259058 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 22:20:52.217625111 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:20:52.228958  259058 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:20:52.228988  259058 cni.go:84] Creating CNI manager for ""
	I1119 22:20:52.229041  259058 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:20:52.229096  259058 start.go:353] cluster config:
	{Name:old-k8s-version-975700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-975700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:20:52.230982  259058 out.go:179] * Starting "old-k8s-version-975700" primary control-plane node in "old-k8s-version-975700" cluster
	I1119 22:20:52.232142  259058 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 22:20:52.233556  259058 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:20:52.234811  259058 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1119 22:20:52.234847  259058 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1119 22:20:52.234854  259058 cache.go:65] Caching tarball of preloaded images
	I1119 22:20:52.234940  259058 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:20:52.234988  259058 preload.go:238] Found /home/jenkins/minikube-integration/21918-9296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1119 22:20:52.235004  259058 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1119 22:20:52.235117  259058 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/old-k8s-version-975700/config.json ...
	I1119 22:20:52.259219  259058 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:20:52.259240  259058 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:20:52.259255  259058 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:20:52.259279  259058 start.go:360] acquireMachinesLock for old-k8s-version-975700: {Name:mka52c69b29c93c8c096280bb309407c44f531b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:20:52.259340  259058 start.go:364] duration metric: took 35.883µs to acquireMachinesLock for "old-k8s-version-975700"
	I1119 22:20:52.259357  259058 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:20:52.259364  259058 fix.go:54] fixHost starting: 
	I1119 22:20:52.259567  259058 cli_runner.go:164] Run: docker container inspect old-k8s-version-975700 --format={{.State.Status}}
	I1119 22:20:52.278120  259058 fix.go:112] recreateIfNeeded on old-k8s-version-975700: state=Stopped err=<nil>
	W1119 22:20:52.278149  259058 fix.go:138] unexpected machine state, will restart: <nil>
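	
	The start log above ends mid-flow; per the command table, this start of old-k8s-version-975700 had not yet finished when the report was generated, and the profile was being brought back from a stopped machine with its existing configuration. As a rough sketch of reproducing the same start outside the test harness (same flags as recorded in the command table above; local cache state, paths, and the minikube v1.37.0 binary location are assumptions):
	
	  minikube start -p old-k8s-version-975700 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.28.0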
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9d98e3b1e5a67       56cc512116c8f       8 seconds ago       Running             busybox                   0                   531292f348e86       busybox                                     default
	2171565edf3d7       52546a367cc9e       13 seconds ago      Running             coredns                   0                   1c79324037c76       coredns-66bc5c9577-82hpr                    kube-system
	d8860258a82e4       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   de2fc5aa05444       storage-provisioner                         kube-system
	683ab4246e75d       409467f978b4a       25 seconds ago      Running             kindnet-cni               0                   6df9bfcc4a890       kindnet-c88rf                               kube-system
	dcbd92b705b37       fc25172553d79       28 seconds ago      Running             kube-proxy                0                   5644699eb9d2b       kube-proxy-qvdld                            kube-system
	7e1261c5393eb       c80c8dbafe7dd       38 seconds ago      Running             kube-controller-manager   0                   4aa24c3fbb529       kube-controller-manager-no-preload-638439   kube-system
	e6425f304360e       c3994bc696102       38 seconds ago      Running             kube-apiserver            0                   082207d8d21e3       kube-apiserver-no-preload-638439            kube-system
	405eabebdf22d       5f1f5298c888d       38 seconds ago      Running             etcd                      0                   4d64cba50742f       etcd-no-preload-638439                      kube-system
	159fa612c2cd4       7dd6aaa1717ab       38 seconds ago      Running             kube-scheduler            0                   d1406c1225272       kube-scheduler-no-preload-638439            kube-system
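	
	The listing above is the CRI/containerd view of the no-preload-638439 node at collection time. Assuming SSH access to the node is still available, a comparable snapshot could be taken with, for example:
	
	  minikube ssh -p no-preload-638439 sudo crictl ps -a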
	
	
	==> containerd <==
	Nov 19 22:20:39 no-preload-638439 containerd[666]: time="2025-11-19T22:20:39.835103665Z" level=info msg="Container 2171565edf3d7efdf300f7e333349f8ff69e3b29100fb2b17a3b661f00c5ec21: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:20:39 no-preload-638439 containerd[666]: time="2025-11-19T22:20:39.838601245Z" level=info msg="CreateContainer within sandbox \"de2fc5aa054444d86e05db7e22e6469af009c84bcdb2d87493ee537ef2db2b1b\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"d8860258a82e4c779916142960fa4bafe11aac24b228816c847879aa27146f91\""
	Nov 19 22:20:39 no-preload-638439 containerd[666]: time="2025-11-19T22:20:39.839302183Z" level=info msg="StartContainer for \"d8860258a82e4c779916142960fa4bafe11aac24b228816c847879aa27146f91\""
	Nov 19 22:20:39 no-preload-638439 containerd[666]: time="2025-11-19T22:20:39.840661082Z" level=info msg="connecting to shim d8860258a82e4c779916142960fa4bafe11aac24b228816c847879aa27146f91" address="unix:///run/containerd/s/61149daaf323c0f190c059eb6e5d17f4a89bffe3f10cc9153695033979afdd69" protocol=ttrpc version=3
	Nov 19 22:20:39 no-preload-638439 containerd[666]: time="2025-11-19T22:20:39.844167970Z" level=info msg="CreateContainer within sandbox \"1c79324037c7646a24733a1633d317cad34f987fd4e2a2427f09d8b1b665386f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2171565edf3d7efdf300f7e333349f8ff69e3b29100fb2b17a3b661f00c5ec21\""
	Nov 19 22:20:39 no-preload-638439 containerd[666]: time="2025-11-19T22:20:39.844841182Z" level=info msg="StartContainer for \"2171565edf3d7efdf300f7e333349f8ff69e3b29100fb2b17a3b661f00c5ec21\""
	Nov 19 22:20:39 no-preload-638439 containerd[666]: time="2025-11-19T22:20:39.846176402Z" level=info msg="connecting to shim 2171565edf3d7efdf300f7e333349f8ff69e3b29100fb2b17a3b661f00c5ec21" address="unix:///run/containerd/s/be9a778b46ba0f762e0ea6f46004071a696b849b492428ac00466289f21516e4" protocol=ttrpc version=3
	Nov 19 22:20:39 no-preload-638439 containerd[666]: time="2025-11-19T22:20:39.912268545Z" level=info msg="StartContainer for \"d8860258a82e4c779916142960fa4bafe11aac24b228816c847879aa27146f91\" returns successfully"
	Nov 19 22:20:39 no-preload-638439 containerd[666]: time="2025-11-19T22:20:39.925510939Z" level=info msg="StartContainer for \"2171565edf3d7efdf300f7e333349f8ff69e3b29100fb2b17a3b661f00c5ec21\" returns successfully"
	Nov 19 22:20:42 no-preload-638439 containerd[666]: time="2025-11-19T22:20:42.920730428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:7de716fc-5cc0-401e-af15-e754abb3f8ee,Namespace:default,Attempt:0,}"
	Nov 19 22:20:42 no-preload-638439 containerd[666]: time="2025-11-19T22:20:42.965127294Z" level=info msg="connecting to shim 531292f348e868c8f9bc938787188a39dcbc6ffa0a8446d934e613dacfc716f9" address="unix:///run/containerd/s/a24e3303b353d012017377837607fe3c4f29d44aab3a08f5b3c6733f210993ab" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 22:20:43 no-preload-638439 containerd[666]: time="2025-11-19T22:20:43.033780144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:7de716fc-5cc0-401e-af15-e754abb3f8ee,Namespace:default,Attempt:0,} returns sandbox id \"531292f348e868c8f9bc938787188a39dcbc6ffa0a8446d934e613dacfc716f9\""
	Nov 19 22:20:43 no-preload-638439 containerd[666]: time="2025-11-19T22:20:43.035432060Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.015636690Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.016541511Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396642"
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.017641226Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.019742257Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.020278554Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 1.984803694s"
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.020311259Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.024123911Z" level=info msg="CreateContainer within sandbox \"531292f348e868c8f9bc938787188a39dcbc6ffa0a8446d934e613dacfc716f9\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.032566245Z" level=info msg="Container 9d98e3b1e5a67590577452ae69a37a3a2460fa0beda90eb8371cb888e44e5577: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.038725144Z" level=info msg="CreateContainer within sandbox \"531292f348e868c8f9bc938787188a39dcbc6ffa0a8446d934e613dacfc716f9\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"9d98e3b1e5a67590577452ae69a37a3a2460fa0beda90eb8371cb888e44e5577\""
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.039383059Z" level=info msg="StartContainer for \"9d98e3b1e5a67590577452ae69a37a3a2460fa0beda90eb8371cb888e44e5577\""
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.040390655Z" level=info msg="connecting to shim 9d98e3b1e5a67590577452ae69a37a3a2460fa0beda90eb8371cb888e44e5577" address="unix:///run/containerd/s/a24e3303b353d012017377837607fe3c4f29d44aab3a08f5b3c6733f210993ab" protocol=ttrpc version=3
	Nov 19 22:20:45 no-preload-638439 containerd[666]: time="2025-11-19T22:20:45.088908611Z" level=info msg="StartContainer for \"9d98e3b1e5a67590577452ae69a37a3a2460fa0beda90eb8371cb888e44e5577\" returns successfully"
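	
	These entries are journald records for the containerd unit on the node; the busybox image pull and container start both complete successfully here. A rough equivalent for pulling the same window of logs, assuming the node is still running, would be:
	
	  minikube ssh -p no-preload-638439 sudo journalctl -u containerd --no-pager -n 30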
	
	
	==> coredns [2171565edf3d7efdf300f7e333349f8ff69e3b29100fb2b17a3b661f00c5ec21] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34304 - 63937 "HINFO IN 145528421484830345.5922166076607501534. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.068700541s
	
	
	==> describe nodes <==
	Name:               no-preload-638439
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-638439
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=no-preload-638439
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_20_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:20:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-638439
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:20:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:20:50 +0000   Wed, 19 Nov 2025 22:20:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:20:50 +0000   Wed, 19 Nov 2025 22:20:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:20:50 +0000   Wed, 19 Nov 2025 22:20:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:20:50 +0000   Wed, 19 Nov 2025 22:20:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-638439
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                cb9234f4-7a8c-4f18-a926-993410815873
	  Boot ID:                    f21fb8e8-9754-4dc5-a8d9-ce41ba5f6057
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-82hpr                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-no-preload-638439                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-c88rf                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-no-preload-638439             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-638439    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-qvdld                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-no-preload-638439             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 34s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  34s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  34s   kubelet          Node no-preload-638439 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s   kubelet          Node no-preload-638439 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s   kubelet          Node no-preload-638439 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node no-preload-638439 event: Registered Node no-preload-638439 in Controller
	  Normal  NodeReady                14s   kubelet          Node no-preload-638439 status is now: NodeReady
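	
	The node description above shows the scheduler-relevant state of no-preload-638439 (Ready, single control-plane node, pod CIDR 10.244.0.0/24). Assuming the profile's kubeconfig context is still present (minikube normally names the context after the profile), it could be refreshed with:
	
	  kubectl --context no-preload-638439 describe node no-preload-638439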
	
	
	==> dmesg <==
	[Nov19 21:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001836] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.089012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.424964] i8042: Warning: Keylock active
	[  +0.011946] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499038] block sda: the capability attribute has been deprecated.
	[  +0.090446] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026259] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.862736] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [405eabebdf22dd831aa00ab4e3ee15e53537277965c0d15fd4a3ac187f178b0b] <==
	{"level":"warn","ts":"2025-11-19T22:20:16.263043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.271195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.278460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.287760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.295063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.302506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.310139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.319907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.327213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.334532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.341234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.347595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.355083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.362330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.369590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.376775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.382876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.390055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.397299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.405353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.413417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.431212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.437805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.443997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:20:16.487667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34546","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:20:54 up  1:03,  0 user,  load average: 4.21, 3.38, 2.13
	Linux no-preload-638439 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [683ab4246e75dc17f6d6bbce97bc19c4413c8de5876941ef071541e73fb083f6] <==
	I1119 22:20:29.118862       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:20:29.119179       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1119 22:20:29.119388       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:20:29.119414       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:20:29.119445       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:20:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:20:29.321409       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:20:29.321663       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:20:29.321681       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:20:29.321861       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:20:29.682098       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:20:29.682124       1 metrics.go:72] Registering metrics
	I1119 22:20:29.682165       1 controller.go:711] "Syncing nftables rules"
	I1119 22:20:39.325078       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 22:20:39.325133       1 main.go:301] handling current node
	I1119 22:20:49.322243       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 22:20:49.322276       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e6425f304360e6c945a35502eef794042bec98774dea1b696a51a81f0238d5c0] <==
	I1119 22:20:16.974676       1 aggregator.go:171] initial CRD sync complete...
	I1119 22:20:16.974695       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 22:20:16.974703       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:20:16.974710       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:20:16.975225       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 22:20:17.158181       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:20:17.870006       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:20:17.873802       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:20:17.873822       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:20:18.342148       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:20:18.379938       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:20:18.473581       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:20:18.479348       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1119 22:20:18.480586       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:20:18.485486       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:20:18.886304       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:20:19.525252       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:20:19.535987       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:20:19.543383       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:20:24.641354       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:20:24.689145       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 22:20:24.689146       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 22:20:24.989418       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:20:24.993037       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1119 22:20:50.717785       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:53648: use of closed network connection
	
	
	==> kube-controller-manager [7e1261c5393eb9b047aef79cc833db37d8b348e2e6fba9c14452088cfc66fcdb] <==
	I1119 22:20:23.885928       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:20:23.885940       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:20:23.885948       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:20:23.886049       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-638439"
	I1119 22:20:23.886087       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:20:23.886141       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 22:20:23.886209       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 22:20:23.886246       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 22:20:23.886387       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:20:23.886454       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 22:20:23.886661       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 22:20:23.887071       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 22:20:23.887158       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 22:20:23.887243       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:20:23.888246       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 22:20:23.888350       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 22:20:23.889415       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 22:20:23.890812       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 22:20:23.890806       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:20:23.892005       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:20:23.897278       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 22:20:23.903463       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 22:20:23.908770       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 22:20:23.912139       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:20:43.888011       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [dcbd92b705b37e35dd2979e89cb6160c3c85860c5fdec20b514148132a315d78] <==
	I1119 22:20:25.884756       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:20:25.942685       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:20:26.043521       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:20:26.043565       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1119 22:20:26.043676       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:20:26.067145       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:20:26.067284       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:20:26.074679       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:20:26.075185       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:20:26.075236       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:20:26.077549       1 config.go:309] "Starting node config controller"
	I1119 22:20:26.077573       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:20:26.078974       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:20:26.079007       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:20:26.079196       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:20:26.079205       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:20:26.079202       1 config.go:200] "Starting service config controller"
	I1119 22:20:26.079241       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:20:26.178275       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:20:26.179483       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 22:20:26.179507       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:20:26.179518       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [159fa612c2cd44cba268f356b3fc242510cdd5755545e3e7616335a46b35eb21] <==
	E1119 22:20:16.916927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:20:16.917081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:20:16.916976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:20:16.917108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:20:16.917101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:20:16.917298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:20:16.917436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:20:16.917436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:20:16.917533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:20:16.917629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:20:16.917669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:20:16.917732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:20:16.917579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:20:16.918257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:20:17.725134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:20:17.775472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:20:17.775472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:20:17.807221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:20:17.827803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:20:17.861330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:20:18.070332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:20:18.081282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:20:18.107459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:20:18.134818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1119 22:20:18.512856       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:20:23 no-preload-638439 kubelet[2185]: I1119 22:20:23.902623    2185 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: I1119 22:20:24.783710    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c17a46d4-7b16-4b78-9678-12006f879013-xtables-lock\") pod \"kube-proxy-qvdld\" (UID: \"c17a46d4-7b16-4b78-9678-12006f879013\") " pod="kube-system/kube-proxy-qvdld"
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: I1119 22:20:24.783752    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbc3f590-a300-4682-8d67-eb512c60e790-xtables-lock\") pod \"kindnet-c88rf\" (UID: \"dbc3f590-a300-4682-8d67-eb512c60e790\") " pod="kube-system/kindnet-c88rf"
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: I1119 22:20:24.783774    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c17a46d4-7b16-4b78-9678-12006f879013-lib-modules\") pod \"kube-proxy-qvdld\" (UID: \"c17a46d4-7b16-4b78-9678-12006f879013\") " pod="kube-system/kube-proxy-qvdld"
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: I1119 22:20:24.783788    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dbc3f590-a300-4682-8d67-eb512c60e790-cni-cfg\") pod \"kindnet-c88rf\" (UID: \"dbc3f590-a300-4682-8d67-eb512c60e790\") " pod="kube-system/kindnet-c88rf"
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: I1119 22:20:24.783803    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x27hz\" (UniqueName: \"kubernetes.io/projected/dbc3f590-a300-4682-8d67-eb512c60e790-kube-api-access-x27hz\") pod \"kindnet-c88rf\" (UID: \"dbc3f590-a300-4682-8d67-eb512c60e790\") " pod="kube-system/kindnet-c88rf"
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: I1119 22:20:24.783828    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c17a46d4-7b16-4b78-9678-12006f879013-kube-proxy\") pod \"kube-proxy-qvdld\" (UID: \"c17a46d4-7b16-4b78-9678-12006f879013\") " pod="kube-system/kube-proxy-qvdld"
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: I1119 22:20:24.783981    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hmcs\" (UniqueName: \"kubernetes.io/projected/c17a46d4-7b16-4b78-9678-12006f879013-kube-api-access-2hmcs\") pod \"kube-proxy-qvdld\" (UID: \"c17a46d4-7b16-4b78-9678-12006f879013\") " pod="kube-system/kube-proxy-qvdld"
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: I1119 22:20:24.784059    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbc3f590-a300-4682-8d67-eb512c60e790-lib-modules\") pod \"kindnet-c88rf\" (UID: \"dbc3f590-a300-4682-8d67-eb512c60e790\") " pod="kube-system/kindnet-c88rf"
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: E1119 22:20:24.891494    2185 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: E1119 22:20:24.891535    2185 projected.go:196] Error preparing data for projected volume kube-api-access-2hmcs for pod kube-system/kube-proxy-qvdld: configmap "kube-root-ca.crt" not found
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: E1119 22:20:24.891626    2185 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c17a46d4-7b16-4b78-9678-12006f879013-kube-api-access-2hmcs podName:c17a46d4-7b16-4b78-9678-12006f879013 nodeName:}" failed. No retries permitted until 2025-11-19 22:20:25.391595822 +0000 UTC m=+6.117533845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2hmcs" (UniqueName: "kubernetes.io/projected/c17a46d4-7b16-4b78-9678-12006f879013-kube-api-access-2hmcs") pod "kube-proxy-qvdld" (UID: "c17a46d4-7b16-4b78-9678-12006f879013") : configmap "kube-root-ca.crt" not found
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: E1119 22:20:24.893717    2185 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: E1119 22:20:24.893744    2185 projected.go:196] Error preparing data for projected volume kube-api-access-x27hz for pod kube-system/kindnet-c88rf: configmap "kube-root-ca.crt" not found
	Nov 19 22:20:24 no-preload-638439 kubelet[2185]: E1119 22:20:24.893815    2185 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dbc3f590-a300-4682-8d67-eb512c60e790-kube-api-access-x27hz podName:dbc3f590-a300-4682-8d67-eb512c60e790 nodeName:}" failed. No retries permitted until 2025-11-19 22:20:25.393782921 +0000 UTC m=+6.119720943 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x27hz" (UniqueName: "kubernetes.io/projected/dbc3f590-a300-4682-8d67-eb512c60e790-kube-api-access-x27hz") pod "kindnet-c88rf" (UID: "dbc3f590-a300-4682-8d67-eb512c60e790") : configmap "kube-root-ca.crt" not found
	Nov 19 22:20:26 no-preload-638439 kubelet[2185]: I1119 22:20:26.409477    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qvdld" podStartSLOduration=2.409454745 podStartE2EDuration="2.409454745s" podCreationTimestamp="2025-11-19 22:20:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:20:26.409252865 +0000 UTC m=+7.135190902" watchObservedRunningTime="2025-11-19 22:20:26.409454745 +0000 UTC m=+7.135392788"
	Nov 19 22:20:29 no-preload-638439 kubelet[2185]: I1119 22:20:29.471871    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-c88rf" podStartSLOduration=2.605197213 podStartE2EDuration="5.471851095s" podCreationTimestamp="2025-11-19 22:20:24 +0000 UTC" firstStartedPulling="2025-11-19 22:20:25.936462764 +0000 UTC m=+6.662400777" lastFinishedPulling="2025-11-19 22:20:28.803116645 +0000 UTC m=+9.529054659" observedRunningTime="2025-11-19 22:20:29.409950648 +0000 UTC m=+10.135888679" watchObservedRunningTime="2025-11-19 22:20:29.471851095 +0000 UTC m=+10.197789127"
	Nov 19 22:20:39 no-preload-638439 kubelet[2185]: I1119 22:20:39.347033    2185 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 22:20:39 no-preload-638439 kubelet[2185]: I1119 22:20:39.488416    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ec1d37f-c58e-4e2d-aa92-81eb01e0fb9a-config-volume\") pod \"coredns-66bc5c9577-82hpr\" (UID: \"1ec1d37f-c58e-4e2d-aa92-81eb01e0fb9a\") " pod="kube-system/coredns-66bc5c9577-82hpr"
	Nov 19 22:20:39 no-preload-638439 kubelet[2185]: I1119 22:20:39.488709    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm88r\" (UniqueName: \"kubernetes.io/projected/1ec1d37f-c58e-4e2d-aa92-81eb01e0fb9a-kube-api-access-nm88r\") pod \"coredns-66bc5c9577-82hpr\" (UID: \"1ec1d37f-c58e-4e2d-aa92-81eb01e0fb9a\") " pod="kube-system/coredns-66bc5c9577-82hpr"
	Nov 19 22:20:39 no-preload-638439 kubelet[2185]: I1119 22:20:39.488789    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0414837f-ea33-47da-b64c-cdb22e9f1040-tmp\") pod \"storage-provisioner\" (UID: \"0414837f-ea33-47da-b64c-cdb22e9f1040\") " pod="kube-system/storage-provisioner"
	Nov 19 22:20:39 no-preload-638439 kubelet[2185]: I1119 22:20:39.488810    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgxkh\" (UniqueName: \"kubernetes.io/projected/0414837f-ea33-47da-b64c-cdb22e9f1040-kube-api-access-lgxkh\") pod \"storage-provisioner\" (UID: \"0414837f-ea33-47da-b64c-cdb22e9f1040\") " pod="kube-system/storage-provisioner"
	Nov 19 22:20:40 no-preload-638439 kubelet[2185]: I1119 22:20:40.436087    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-82hpr" podStartSLOduration=15.436064241 podStartE2EDuration="15.436064241s" podCreationTimestamp="2025-11-19 22:20:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:20:40.435825083 +0000 UTC m=+21.161763113" watchObservedRunningTime="2025-11-19 22:20:40.436064241 +0000 UTC m=+21.162002273"
	Nov 19 22:20:40 no-preload-638439 kubelet[2185]: I1119 22:20:40.456073    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.456052647 podStartE2EDuration="15.456052647s" podCreationTimestamp="2025-11-19 22:20:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:20:40.446030606 +0000 UTC m=+21.171968636" watchObservedRunningTime="2025-11-19 22:20:40.456052647 +0000 UTC m=+21.181990679"
	Nov 19 22:20:42 no-preload-638439 kubelet[2185]: I1119 22:20:42.707092    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsksb\" (UniqueName: \"kubernetes.io/projected/7de716fc-5cc0-401e-af15-e754abb3f8ee-kube-api-access-xsksb\") pod \"busybox\" (UID: \"7de716fc-5cc0-401e-af15-e754abb3f8ee\") " pod="default/busybox"
	
	
	==> storage-provisioner [d8860258a82e4c779916142960fa4bafe11aac24b228816c847879aa27146f91] <==
	I1119 22:20:39.918121       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:20:39.928301       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:20:39.928385       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:20:39.931595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:39.939073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:20:39.939256       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:20:39.939410       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ea0f5240-138c-44d8-830f-af4064436d86", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-638439_b327f305-43e2-481a-83b9-0eb1dba2136f became leader
	I1119 22:20:39.939439       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-638439_b327f305-43e2-481a-83b9-0eb1dba2136f!
	W1119 22:20:39.943065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:39.951033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:20:40.039590       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-638439_b327f305-43e2-481a-83b9-0eb1dba2136f!
	W1119 22:20:41.956172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:41.961423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:43.964285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:43.969049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:45.971705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:45.976626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:47.979993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:47.984149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:49.987200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:49.992727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:51.997100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:52.001077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:54.004611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:20:54.008864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-638439 -n no-preload-638439
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-638439 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (12.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (12.69s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-299509 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bb1fff85-0367-4004-a462-e99ccd3ceeb3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bb1fff85-0367-4004-a462-e99ccd3ceeb3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003650278s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-299509 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
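The assertion above reads the open-file (RLIMIT_NOFILE) soft limit inside the busybox pod: the container actually inherited 1024, while the test expects 1048576. A minimal way to inspect this by hand, assuming the embed-certs-299509 profile is still running and assuming (this is not stated in the report) that the expected value is meant to flow down from the containerd unit's LimitNOFILE on the node:

	# soft and hard open-file limits as seen inside the pod
	kubectl --context embed-certs-299509 exec busybox -- /bin/sh -c "ulimit -n; ulimit -H -n"
	# LimitNOFILE configured for the containerd service on the minikube node
	out/minikube-linux-amd64 -p embed-certs-299509 ssh -- systemctl show containerd --property=LimitNOFILE

If the node-level limit already reports 1048576 while the pod still sees 1024, the lower value is likely being applied between containerd and the container process (runtime default or per-process rlimit) rather than by the node configuration.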
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-299509
helpers_test.go:243: (dbg) docker inspect embed-certs-299509:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8c914f6ed883b40972e81b2ba6077f6dafca54021f10a22c1638b53636970d5b",
	        "Created": "2025-11-19T22:22:01.188638615Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272073,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:22:01.237383145Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/8c914f6ed883b40972e81b2ba6077f6dafca54021f10a22c1638b53636970d5b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8c914f6ed883b40972e81b2ba6077f6dafca54021f10a22c1638b53636970d5b/hostname",
	        "HostsPath": "/var/lib/docker/containers/8c914f6ed883b40972e81b2ba6077f6dafca54021f10a22c1638b53636970d5b/hosts",
	        "LogPath": "/var/lib/docker/containers/8c914f6ed883b40972e81b2ba6077f6dafca54021f10a22c1638b53636970d5b/8c914f6ed883b40972e81b2ba6077f6dafca54021f10a22c1638b53636970d5b-json.log",
	        "Name": "/embed-certs-299509",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-299509:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-299509",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8c914f6ed883b40972e81b2ba6077f6dafca54021f10a22c1638b53636970d5b",
	                "LowerDir": "/var/lib/docker/overlay2/090a077d4867dc9f58314a1bc1d4b6ba4cb458dfc507ac1cde0f19a4105d8462-init/diff:/var/lib/docker/overlay2/b09480e350abbb2f4f48b19448dc8e9ddd0de679fdb8cd59ebc5b758a29b344e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/090a077d4867dc9f58314a1bc1d4b6ba4cb458dfc507ac1cde0f19a4105d8462/merged",
	                "UpperDir": "/var/lib/docker/overlay2/090a077d4867dc9f58314a1bc1d4b6ba4cb458dfc507ac1cde0f19a4105d8462/diff",
	                "WorkDir": "/var/lib/docker/overlay2/090a077d4867dc9f58314a1bc1d4b6ba4cb458dfc507ac1cde0f19a4105d8462/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-299509",
	                "Source": "/var/lib/docker/volumes/embed-certs-299509/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-299509",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-299509",
	                "name.minikube.sigs.k8s.io": "embed-certs-299509",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fef08dfcdb6bcaff8dd6f7c67530a8173d7a0d0114a4d82b68265e8ae516e37b",
	            "SandboxKey": "/var/run/docker/netns/fef08dfcdb6b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-299509": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0aa8a9c74cb17c34fdff9b7cd85f2551ab3dbab0447c24a33d9c9e57813d5094",
	                    "EndpointID": "b36f417bccba7fce7a10e7235d1d2e9314070a1b269362f9c55518b1934c8df6",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "42:ab:ee:91:57:85",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-299509",
	                        "8c914f6ed883"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-299509 -n embed-certs-299509
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-299509 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-299509 logs -n 25: (1.169442367s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ start   │ -p no-preload-638439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:20 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-975700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:20 UTC │ 19 Nov 25 22:20 UTC │
	│ stop    │ -p old-k8s-version-975700 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:20 UTC │ 19 Nov 25 22:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-975700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:20 UTC │ 19 Nov 25 22:20 UTC │
	│ start   │ -p old-k8s-version-975700 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:20 UTC │ 19 Nov 25 22:21 UTC │
	│ addons  │ enable metrics-server -p no-preload-638439 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:20 UTC │ 19 Nov 25 22:20 UTC │
	│ stop    │ -p no-preload-638439 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:20 UTC │ 19 Nov 25 22:21 UTC │
	│ addons  │ enable dashboard -p no-preload-638439 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:21 UTC │
	│ start   │ -p no-preload-638439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:21 UTC │
	│ image   │ old-k8s-version-975700 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:21 UTC │
	│ pause   │ -p old-k8s-version-975700 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:21 UTC │
	│ unpause │ -p old-k8s-version-975700 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:21 UTC │
	│ delete  │ -p old-k8s-version-975700                                                                                                                                                                                                                           │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:21 UTC │
	│ delete  │ -p old-k8s-version-975700                                                                                                                                                                                                                           │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:21 UTC │
	│ start   │ -p embed-certs-299509 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-299509           │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:22 UTC │
	│ image   │ no-preload-638439 image list --format=json                                                                                                                                                                                                          │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ pause   │ -p no-preload-638439 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ unpause │ -p no-preload-638439 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ delete  │ -p no-preload-638439                                                                                                                                                                                                                                │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ start   │ -p cert-expiration-207460 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-207460       │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ delete  │ -p no-preload-638439                                                                                                                                                                                                                                │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ delete  │ -p disable-driver-mounts-837642                                                                                                                                                                                                                     │ disable-driver-mounts-837642 │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ start   │ -p default-k8s-diff-port-409240 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-409240 │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │                     │
	│ delete  │ -p cert-expiration-207460                                                                                                                                                                                                                           │ cert-expiration-207460       │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ start   │ -p newest-cni-982287 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-982287            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:22:24
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:22:24.161753  280330 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:22:24.162111  280330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:22:24.162126  280330 out.go:374] Setting ErrFile to fd 2...
	I1119 22:22:24.162134  280330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:22:24.162460  280330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
	I1119 22:22:24.164759  280330 out.go:368] Setting JSON to false
	I1119 22:22:24.166474  280330 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3884,"bootTime":1763587060,"procs":335,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:22:24.166610  280330 start.go:143] virtualization: kvm guest
	I1119 22:22:24.168838  280330 out.go:179] * [newest-cni-982287] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:22:24.172664  280330 notify.go:221] Checking for updates...
	I1119 22:22:24.172695  280330 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:22:24.174491  280330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:22:24.175742  280330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:22:24.177038  280330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9296/.minikube
	I1119 22:22:24.178419  280330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:22:24.179831  280330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:22:24.181589  280330 config.go:182] Loaded profile config "default-k8s-diff-port-409240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:22:24.181772  280330 config.go:182] Loaded profile config "embed-certs-299509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:22:24.181940  280330 config.go:182] Loaded profile config "kubernetes-upgrade-133839": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:22:24.182095  280330 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:22:24.210716  280330 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:22:24.210847  280330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:22:24.285322  280330 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-19 22:22:24.267236293 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:22:24.285434  280330 docker.go:319] overlay module found
	I1119 22:22:24.288716  280330 out.go:179] * Using the docker driver based on user configuration
	I1119 22:22:24.290115  280330 start.go:309] selected driver: docker
	I1119 22:22:24.290136  280330 start.go:930] validating driver "docker" against <nil>
	I1119 22:22:24.290156  280330 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:22:24.290864  280330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:22:24.356561  280330 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-19 22:22:24.346163396 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:22:24.356761  280330 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1119 22:22:24.356795  280330 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1119 22:22:24.357205  280330 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 22:22:24.359530  280330 out.go:179] * Using Docker driver with root privileges
	I1119 22:22:24.360851  280330 cni.go:84] Creating CNI manager for ""
	I1119 22:22:24.360927  280330 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:22:24.360960  280330 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:22:24.361032  280330 start.go:353] cluster config:
	{Name:newest-cni-982287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-982287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:22:24.363356  280330 out.go:179] * Starting "newest-cni-982287" primary control-plane node in "newest-cni-982287" cluster
	I1119 22:22:24.364859  280330 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 22:22:24.366384  280330 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:22:24.367705  280330 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:22:24.367771  280330 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1119 22:22:24.367787  280330 cache.go:65] Caching tarball of preloaded images
	I1119 22:22:24.367824  280330 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:22:24.367892  280330 preload.go:238] Found /home/jenkins/minikube-integration/21918-9296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1119 22:22:24.367908  280330 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1119 22:22:24.368018  280330 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/config.json ...
	I1119 22:22:24.368040  280330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/config.json: {Name:mkb02b749fc99339e72978c4ec7a212ddec516c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:24.391802  280330 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:22:24.391822  280330 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:22:24.391838  280330 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:22:24.391873  280330 start.go:360] acquireMachinesLock for newest-cni-982287: {Name:mke27c2b85aec9405ad5413bcb0f1bda4c4bbb7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:22:24.392018  280330 start.go:364] duration metric: took 98.197µs to acquireMachinesLock for "newest-cni-982287"
	I1119 22:22:24.392049  280330 start.go:93] Provisioning new machine with config: &{Name:newest-cni-982287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-982287 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:22:24.392139  280330 start.go:125] createHost starting for "" (driver="docker")
	I1119 22:22:21.688654  276591 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-9296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-409240:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.845223655s)
	I1119 22:22:21.688691  276591 kic.go:203] duration metric: took 4.845376641s to extract preloaded images to volume ...
	W1119 22:22:21.688779  276591 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 22:22:21.688827  276591 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 22:22:21.688871  276591 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:22:21.756090  276591 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-409240 --name default-k8s-diff-port-409240 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-409240 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-409240 --network default-k8s-diff-port-409240 --ip 192.168.103.2 --volume default-k8s-diff-port-409240:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:22:22.206834  276591 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409240 --format={{.State.Running}}
	I1119 22:22:22.243232  276591 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409240 --format={{.State.Status}}
	I1119 22:22:22.272068  276591 cli_runner.go:164] Run: docker exec default-k8s-diff-port-409240 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:22:22.343028  276591 oci.go:144] the created container "default-k8s-diff-port-409240" has a running status.
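The node here is just a privileged Docker container; from this point on, provisioning happens over SSH to a port Docker published on 127.0.0.1 (the --publish=127.0.0.1::22 flag above). Outside the harness, that port can be looked up directly; a quick sketch using the container name from this run:

    # Which host port Docker mapped to the node's SSH port (22/tcp)
    docker port default-k8s-diff-port-409240 22/tcp
    # e.g. 127.0.0.1:33083 -- the same address the libmachine SSH client uses below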
	I1119 22:22:22.343065  276591 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-9296/.minikube/machines/default-k8s-diff-port-409240/id_rsa...
	I1119 22:22:22.554014  276591 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-9296/.minikube/machines/default-k8s-diff-port-409240/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:22:22.588222  276591 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409240 --format={{.State.Status}}
	I1119 22:22:22.618774  276591 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:22:22.618798  276591 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-409240 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:22:22.678316  276591 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409240 --format={{.State.Status}}
	I1119 22:22:22.704960  276591 machine.go:94] provisionDockerMachine start ...
	I1119 22:22:22.705112  276591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409240
	I1119 22:22:22.728714  276591 main.go:143] libmachine: Using SSH client type: native
	I1119 22:22:22.729061  276591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1119 22:22:22.729078  276591 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:22:22.868504  276591 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-409240
	
	I1119 22:22:22.868533  276591 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-409240"
	I1119 22:22:22.868583  276591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409240
	I1119 22:22:22.891991  276591 main.go:143] libmachine: Using SSH client type: native
	I1119 22:22:22.892307  276591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1119 22:22:22.892335  276591 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-409240 && echo "default-k8s-diff-port-409240" | sudo tee /etc/hostname
	I1119 22:22:23.041410  276591 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-409240
	
	I1119 22:22:23.041575  276591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409240
	I1119 22:22:23.064046  276591 main.go:143] libmachine: Using SSH client type: native
	I1119 22:22:23.064278  276591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1119 22:22:23.064306  276591 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-409240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-409240/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-409240' | sudo tee -a /etc/hosts; 
				fi
			fi
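The multi-line SSH command above makes the node's new hostname resolvable locally by pinning it to 127.0.1.1 in /etc/hosts, but only when no matching entry exists yet. A quick check afterwards, run inside the node, would be:

    # Should print the 127.0.1.1 entry added above (name from this run)
    getent hosts default-k8s-diff-port-409240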
	I1119 22:22:23.198838  276591 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:22:23.198866  276591 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9296/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9296/.minikube}
	I1119 22:22:23.198908  276591 ubuntu.go:190] setting up certificates
	I1119 22:22:23.198921  276591 provision.go:84] configureAuth start
	I1119 22:22:23.198971  276591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-409240
	I1119 22:22:23.217760  276591 provision.go:143] copyHostCerts
	I1119 22:22:23.217831  276591 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem, removing ...
	I1119 22:22:23.217844  276591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem
	I1119 22:22:23.217943  276591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem (1078 bytes)
	I1119 22:22:23.218061  276591 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem, removing ...
	I1119 22:22:23.218073  276591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem
	I1119 22:22:23.218119  276591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem (1123 bytes)
	I1119 22:22:23.218199  276591 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem, removing ...
	I1119 22:22:23.218210  276591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem
	I1119 22:22:23.218242  276591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem (1679 bytes)
	I1119 22:22:23.218316  276591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-409240 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-409240 localhost minikube]
	I1119 22:22:23.274597  276591 provision.go:177] copyRemoteCerts
	I1119 22:22:23.274661  276591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:22:23.274717  276591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409240
	I1119 22:22:23.295581  276591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/default-k8s-diff-port-409240/id_rsa Username:docker}
	I1119 22:22:23.391822  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:22:23.412051  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 22:22:23.430680  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:22:23.462047  276591 provision.go:87] duration metric: took 263.112413ms to configureAuth
	I1119 22:22:23.462082  276591 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:22:23.462267  276591 config.go:182] Loaded profile config "default-k8s-diff-port-409240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:22:23.462282  276591 machine.go:97] duration metric: took 757.266023ms to provisionDockerMachine
	I1119 22:22:23.462291  276591 client.go:176] duration metric: took 7.278239396s to LocalClient.Create
	I1119 22:22:23.462316  276591 start.go:167] duration metric: took 7.278303414s to libmachine.API.Create "default-k8s-diff-port-409240"
	I1119 22:22:23.462329  276591 start.go:293] postStartSetup for "default-k8s-diff-port-409240" (driver="docker")
	I1119 22:22:23.462347  276591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:22:23.462408  276591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:22:23.462454  276591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409240
	I1119 22:22:23.484075  276591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/default-k8s-diff-port-409240/id_rsa Username:docker}
	I1119 22:22:23.588498  276591 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:22:23.592579  276591 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:22:23.592603  276591 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:22:23.592613  276591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9296/.minikube/addons for local assets ...
	I1119 22:22:23.592656  276591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9296/.minikube/files for local assets ...
	I1119 22:22:23.592742  276591 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem -> 128212.pem in /etc/ssl/certs
	I1119 22:22:23.592831  276591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:22:23.601335  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem --> /etc/ssl/certs/128212.pem (1708 bytes)
	I1119 22:22:23.627689  276591 start.go:296] duration metric: took 165.338567ms for postStartSetup
	I1119 22:22:23.628117  276591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-409240
	I1119 22:22:23.653523  276591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/config.json ...
	I1119 22:22:23.654543  276591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:22:23.654587  276591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409240
	I1119 22:22:23.674215  276591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/default-k8s-diff-port-409240/id_rsa Username:docker}
	I1119 22:22:23.766409  276591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:22:23.771916  276591 start.go:128] duration metric: took 7.591286902s to createHost
	I1119 22:22:23.771940  276591 start.go:83] releasing machines lock for "default-k8s-diff-port-409240", held for 7.591415686s
	I1119 22:22:23.772001  276591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-409240
	I1119 22:22:23.794108  276591 ssh_runner.go:195] Run: cat /version.json
	I1119 22:22:23.794164  276591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409240
	I1119 22:22:23.794183  276591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:22:23.794255  276591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409240
	I1119 22:22:23.830841  276591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/default-k8s-diff-port-409240/id_rsa Username:docker}
	I1119 22:22:23.835377  276591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/default-k8s-diff-port-409240/id_rsa Username:docker}
	I1119 22:22:23.927000  276591 ssh_runner.go:195] Run: systemctl --version
	I1119 22:22:24.012697  276591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:22:24.018691  276591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:22:24.018756  276591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:22:24.055860  276591 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 22:22:24.055902  276591 start.go:496] detecting cgroup driver to use...
	I1119 22:22:24.055996  276591 detect.go:190] detected "systemd" cgroup driver on host os
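The cgroup driver detected here ("systemd") has to be used consistently by containerd and the kubelet, which is what the config edits further down enforce. Two hedged ways to see the same host-side information by hand (not necessarily the exact probe minikube's detect.go runs):

    # Cgroup driver the host Docker daemon itself uses
    docker info --format '{{.CgroupDriver}}'
    # "cgroup2fs" here means the host is on cgroup v2
    stat -fc %T /sys/fs/cgroup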
	I1119 22:22:24.056062  276591 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 22:22:24.073778  276591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 22:22:24.087562  276591 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:22:24.087619  276591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:22:24.106056  276591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:22:24.126564  276591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:22:24.227013  276591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:22:24.341035  276591 docker.go:234] disabling docker service ...
	I1119 22:22:24.341101  276591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:22:24.363772  276591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:22:24.378070  276591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:22:24.487114  276591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:22:24.583821  276591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:22:24.597532  276591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:22:24.614001  276591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 22:22:24.627405  276591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 22:22:24.636866  276591 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1119 22:22:24.636942  276591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1119 22:22:24.646877  276591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:22:24.657015  276591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 22:22:24.666697  276591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:22:24.678202  276591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:22:24.687477  276591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 22:22:24.698728  276591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 22:22:24.709124  276591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 22:22:24.719391  276591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:22:24.728022  276591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:22:24.736167  276591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:22:24.841027  276591 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1119 22:22:24.953379  276591 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 22:22:24.953453  276591 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 22:22:24.957773  276591 start.go:564] Will wait 60s for crictl version
	I1119 22:22:24.957840  276591 ssh_runner.go:195] Run: which crictl
	I1119 22:22:24.961692  276591 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:22:24.991056  276591 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 22:22:24.991121  276591 ssh_runner.go:195] Run: containerd --version
	I1119 22:22:25.015332  276591 ssh_runner.go:195] Run: containerd --version
	I1119 22:22:25.040622  276591 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
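The sequence above rewrites /etc/containerd/config.toml in place (pause image, SystemdCgroup = true, CNI conf_dir, unprivileged ports), restarts containerd, and then talks to it over the CRI socket with crictl. When a run like this needs to be checked manually inside the node, something along these lines is enough (paths as used in this run):

    # Confirm the cgroup-driver edit landed in containerd's config
    sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml
    # Confirm containerd answers on the CRI socket crictl was pointed at
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version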
	I1119 22:22:20.522712  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:22:20.523297  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:22:20.523347  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:22:20.523395  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:22:20.551613  216336 cri.go:89] found id: "672f6a49fc2495f7edbf0877ebea2b24dae747ad6b41cdebff881f0f0e4ceb59"
	I1119 22:22:20.551633  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:22:20.551638  216336 cri.go:89] found id: ""
	I1119 22:22:20.551645  216336 logs.go:282] 2 containers: [672f6a49fc2495f7edbf0877ebea2b24dae747ad6b41cdebff881f0f0e4ceb59 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:22:20.551689  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:22:20.555787  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:22:20.560093  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:22:20.560165  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:22:20.588254  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:22:20.588273  216336 cri.go:89] found id: ""
	I1119 22:22:20.588280  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:22:20.588332  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:22:20.592566  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:22:20.592647  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:22:20.619583  216336 cri.go:89] found id: ""
	I1119 22:22:20.619604  216336 logs.go:282] 0 containers: []
	W1119 22:22:20.619611  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:22:20.619617  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:22:20.619671  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:22:20.646478  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:22:20.646500  216336 cri.go:89] found id: ""
	I1119 22:22:20.646511  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:22:20.646574  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:22:20.651611  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:22:20.651676  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:22:20.678618  216336 cri.go:89] found id: ""
	I1119 22:22:20.678643  216336 logs.go:282] 0 containers: []
	W1119 22:22:20.678654  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:22:20.678663  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:22:20.678721  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:22:20.705401  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:22:20.705427  216336 cri.go:89] found id: ""
	I1119 22:22:20.705437  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:22:20.705503  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:22:20.709579  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:22:20.709632  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:22:20.740166  216336 cri.go:89] found id: ""
	I1119 22:22:20.740192  216336 logs.go:282] 0 containers: []
	W1119 22:22:20.740203  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:22:20.740210  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:22:20.740266  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:22:20.770279  216336 cri.go:89] found id: ""
	I1119 22:22:20.770300  216336 logs.go:282] 0 containers: []
	W1119 22:22:20.770308  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:22:20.770321  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:22:20.770335  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:22:20.785998  216336 logs.go:123] Gathering logs for kube-apiserver [672f6a49fc2495f7edbf0877ebea2b24dae747ad6b41cdebff881f0f0e4ceb59] ...
	I1119 22:22:20.786036  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 672f6a49fc2495f7edbf0877ebea2b24dae747ad6b41cdebff881f0f0e4ceb59"
	I1119 22:22:20.822426  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:22:20.822457  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:22:20.862380  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:22:20.862419  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:22:20.901714  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:22:20.901751  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:22:20.935491  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:22:20.935523  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:22:20.967640  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:22:20.967676  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:22:21.050652  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:22:21.050681  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:22:21.050693  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:22:21.085685  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:22:21.085717  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:22:21.132329  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:22:21.132367  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:22:23.736997  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:22:23.737408  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:22:23.737456  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:22:23.737501  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:22:23.765875  216336 cri.go:89] found id: "672f6a49fc2495f7edbf0877ebea2b24dae747ad6b41cdebff881f0f0e4ceb59"
	I1119 22:22:23.765911  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:22:23.765916  216336 cri.go:89] found id: ""
	I1119 22:22:23.765924  216336 logs.go:282] 2 containers: [672f6a49fc2495f7edbf0877ebea2b24dae747ad6b41cdebff881f0f0e4ceb59 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:22:23.765980  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:22:23.770141  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:22:23.774064  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:22:23.774126  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:22:23.825762  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:22:23.825789  216336 cri.go:89] found id: ""
	I1119 22:22:23.825799  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:22:23.825855  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:22:23.831125  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:22:23.831183  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:22:23.862766  216336 cri.go:89] found id: ""
	I1119 22:22:23.862792  216336 logs.go:282] 0 containers: []
	W1119 22:22:23.862800  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:22:23.862806  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:22:23.862864  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:22:23.891863  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:22:23.891896  216336 cri.go:89] found id: ""
	I1119 22:22:23.891907  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:22:23.891977  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:22:23.896561  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:22:23.896633  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:22:23.928120  216336 cri.go:89] found id: ""
	I1119 22:22:23.928144  216336 logs.go:282] 0 containers: []
	W1119 22:22:23.928154  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:22:23.928161  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:22:23.928213  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:22:23.960778  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:22:23.960807  216336 cri.go:89] found id: ""
	I1119 22:22:23.960817  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:22:23.960920  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:22:23.965121  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:22:23.965194  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:22:23.993822  216336 cri.go:89] found id: ""
	I1119 22:22:23.993850  216336 logs.go:282] 0 containers: []
	W1119 22:22:23.993859  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:22:23.993867  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:22:23.993944  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:22:24.024263  216336 cri.go:89] found id: ""
	I1119 22:22:24.024283  216336 logs.go:282] 0 containers: []
	W1119 22:22:24.024290  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:22:24.024310  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:22:24.024324  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:22:24.038915  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:22:24.038941  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:22:24.114084  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:22:24.114104  216336 logs.go:123] Gathering logs for kube-apiserver [672f6a49fc2495f7edbf0877ebea2b24dae747ad6b41cdebff881f0f0e4ceb59] ...
	I1119 22:22:24.114118  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 672f6a49fc2495f7edbf0877ebea2b24dae747ad6b41cdebff881f0f0e4ceb59"
	I1119 22:22:24.154032  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:22:24.154069  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:22:24.198232  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:22:24.198262  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:22:24.238737  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:22:24.238778  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:22:24.372597  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:22:24.372630  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:22:24.411157  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:22:24.411194  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:22:24.453553  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:22:24.453595  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:22:24.496874  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:22:24.496926  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
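Each pass of this retry loop gathers the same post-mortem bundle: dmesg, per-container crictl logs, journalctl for containerd and the kubelet, plus a "describe nodes" that keeps failing because the apiserver is down. The same data can be pulled by hand from the node container; a sketch, where <node-container> is a placeholder for the profile's container name from docker ps (not shown in this excerpt):

    # Gather the same diagnostics manually
    docker exec <node-container> journalctl -u kubelet -n 400
    docker exec <node-container> journalctl -u containerd -n 400
    docker exec <node-container> /usr/local/bin/crictl ps -a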
	I1119 22:22:22.537776  271072 addons.go:515] duration metric: took 642.983318ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 22:22:22.793543  271072 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-299509" context rescaled to 1 replicas
	W1119 22:22:24.294493  271072 node_ready.go:57] node "embed-certs-299509" has "Ready":"False" status (will retry)
	I1119 22:22:25.042632  276591 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-409240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:22:25.062187  276591 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1119 22:22:25.066834  276591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:22:25.079613  276591 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-409240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409240 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreD
NSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:22:25.079801  276591 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:22:25.079953  276591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:22:25.105677  276591 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:22:25.105695  276591 containerd.go:534] Images already preloaded, skipping extraction
	I1119 22:22:25.105737  276591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:22:25.134959  276591 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:22:25.134980  276591 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:22:25.134988  276591 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.1 containerd true true} ...
	I1119 22:22:25.135069  276591 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-409240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
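The kubelet invocation above is installed as a systemd drop-in (scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further below). Inside the node, the effective unit including that drop-in can be reviewed with:

    # Show the kubelet unit plus every drop-in overriding it
    systemctl cat kubelet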
	I1119 22:22:25.135113  276591 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:22:25.167678  276591 cni.go:84] Creating CNI manager for ""
	I1119 22:22:25.167709  276591 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:22:25.167729  276591 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:22:25.167757  276591 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-409240 NodeName:default-k8s-diff-port-409240 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube
/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:22:25.167924  276591 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-409240"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:22:25.168000  276591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:22:25.177102  276591 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:22:25.177167  276591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:22:25.185659  276591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (333 bytes)
	I1119 22:22:25.201528  276591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:22:25.219604  276591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2243 bytes)
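The rendered kubeadm config (2243 bytes here) is staged as /var/tmp/minikube/kubeadm.yaml.new on the node before kubeadm runs against it. Recent kubeadm releases can sanity-check such a file directly; a hedged sketch, run inside the node with the binary path used elsewhere in this log:

    # Ask kubeadm itself to validate the staged config (subcommand exists in recent releases)
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new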
	I1119 22:22:25.234045  276591 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:22:25.238413  276591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:22:25.250526  276591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:22:25.343442  276591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:22:25.371812  276591 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240 for IP: 192.168.103.2
	I1119 22:22:25.371837  276591 certs.go:195] generating shared ca certs ...
	I1119 22:22:25.371858  276591 certs.go:227] acquiring lock for ca certs: {Name:mkfe62d1b64cfdbe1c6a3d1f38aa0edc5b9ec419 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:25.372058  276591 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key
	I1119 22:22:25.372131  276591 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key
	I1119 22:22:25.372150  276591 certs.go:257] generating profile certs ...
	I1119 22:22:25.372238  276591 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/client.key
	I1119 22:22:25.372266  276591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/client.crt with IP's: []
	I1119 22:22:25.631136  276591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/client.crt ...
	I1119 22:22:25.631165  276591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/client.crt: {Name:mk5f39f8d1a37a2e94108e0d9a32b5b6758e90b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:25.631331  276591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/client.key ...
	I1119 22:22:25.631347  276591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/client.key: {Name:mkb9a1787bba9fa4e7734f7dc514abd509a689a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:25.631432  276591 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.key.444ee3d6
	I1119 22:22:25.631451  276591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.crt.444ee3d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
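The apiserver certificate is generated with the in-cluster service IP, loopback, and the node IP as SANs. After the fact, the SANs actually baked into the file named above can be double-checked with openssl:

    # List the Subject Alternative Names in the generated apiserver cert
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.crt.444ee3d6 \
      | grep -A1 'Subject Alternative Name'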
	I1119 22:22:24.394388  280330 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:22:24.394785  280330 start.go:159] libmachine.API.Create for "newest-cni-982287" (driver="docker")
	I1119 22:22:24.394824  280330 client.go:173] LocalClient.Create starting
	I1119 22:22:24.394987  280330 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem
	I1119 22:22:24.395041  280330 main.go:143] libmachine: Decoding PEM data...
	I1119 22:22:24.395067  280330 main.go:143] libmachine: Parsing certificate...
	I1119 22:22:24.395137  280330 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem
	I1119 22:22:24.395166  280330 main.go:143] libmachine: Decoding PEM data...
	I1119 22:22:24.395182  280330 main.go:143] libmachine: Parsing certificate...
	I1119 22:22:24.395629  280330 cli_runner.go:164] Run: docker network inspect newest-cni-982287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:22:24.423746  280330 cli_runner.go:211] docker network inspect newest-cni-982287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:22:24.423844  280330 network_create.go:284] running [docker network inspect newest-cni-982287] to gather additional debugging logs...
	I1119 22:22:24.423868  280330 cli_runner.go:164] Run: docker network inspect newest-cni-982287
	W1119 22:22:24.445080  280330 cli_runner.go:211] docker network inspect newest-cni-982287 returned with exit code 1
	I1119 22:22:24.445120  280330 network_create.go:287] error running [docker network inspect newest-cni-982287]: docker network inspect newest-cni-982287: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-982287 not found
	I1119 22:22:24.445136  280330 network_create.go:289] output of [docker network inspect newest-cni-982287]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-982287 not found
	
	** /stderr **
	I1119 22:22:24.445260  280330 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:22:24.466599  280330 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-02d9279961e7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:f0:7b:99:dd:08} reservation:<nil>}
	I1119 22:22:24.467401  280330 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-474134d72c89 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ee:14:41:ce:21:e4} reservation:<nil>}
	I1119 22:22:24.468189  280330 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-527206f47d61 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:ef:fd:4c:e4:1b} reservation:<nil>}
	I1119 22:22:24.469003  280330 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ac16fd64007f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:16:dc:21:09:78:e5} reservation:<nil>}
	I1119 22:22:24.470218  280330 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f14120}
	I1119 22:22:24.470248  280330 network_create.go:124] attempt to create docker network newest-cni-982287 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1119 22:22:24.470315  280330 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-982287 newest-cni-982287
	I1119 22:22:24.533470  280330 network_create.go:108] docker network newest-cni-982287 192.168.85.0/24 created
	I1119 22:22:24.533519  280330 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-982287" container
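Subnet selection walks the 192.168.x.0/24 private ranges, skips the ones already taken by other profile bridges, creates a labeled bridge network on the first free one (192.168.85.0/24 here), and derives the node's static IP from it. The result can be inspected directly:

    # Subnet/gateway Docker recorded for the new profile network
    docker network inspect newest-cni-982287 --format '{{json .IPAM.Config}}'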
	I1119 22:22:24.533610  280330 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:22:24.554770  280330 cli_runner.go:164] Run: docker volume create newest-cni-982287 --label name.minikube.sigs.k8s.io=newest-cni-982287 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:22:24.574773  280330 oci.go:103] Successfully created a docker volume newest-cni-982287
	I1119 22:22:24.574875  280330 cli_runner.go:164] Run: docker run --rm --name newest-cni-982287-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-982287 --entrypoint /usr/bin/test -v newest-cni-982287:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:22:25.015010  280330 oci.go:107] Successfully prepared a docker volume newest-cni-982287
	I1119 22:22:25.015083  280330 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:22:25.015097  280330 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 22:22:25.015177  280330 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-9296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-982287:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
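The preloaded image tarball is unpacked into the profile's Docker volume by a throwaway kicbase container whose entrypoint is tar: the tarball is bind-mounted read-only, the volume is mounted at /extractDir, and tar extracts it with lz4 decompression. Stripped of the run-specific names, the pattern is:

    # Generic shape of the extraction step above (names in <> are placeholders)
    docker run --rm --entrypoint /usr/bin/tar \
      -v <preloaded-images.tar.lz4>:/preloaded.tar:ro \
      -v <profile-volume>:/extractDir \
      <kicbase-image> -I lz4 -xf /preloaded.tar -C /extractDir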
	I1119 22:22:27.052619  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:22:27.053060  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:22:27.053134  216336 kubeadm.go:602] duration metric: took 4m8.100180752s to restartPrimaryControlPlane
	W1119 22:22:27.053205  216336 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1119 22:22:27.053270  216336 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1119 22:22:29.595586  216336 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.542287897s)
	I1119 22:22:29.595659  216336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:22:29.613831  216336 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:22:29.624808  216336 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:22:29.624878  216336 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:22:29.634866  216336 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:22:29.634910  216336 kubeadm.go:158] found existing configuration files:
	
	I1119 22:22:29.634958  216336 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 22:22:29.643935  216336 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:22:29.643998  216336 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:22:29.651967  216336 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 22:22:29.661497  216336 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:22:29.661562  216336 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:22:29.671814  216336 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 22:22:29.680395  216336 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:22:29.680451  216336 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:22:29.688981  216336 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 22:22:29.697200  216336 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:22:29.697265  216336 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 22:22:29.704874  216336 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:22:29.744838  216336 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:22:29.744909  216336 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:22:29.766513  216336 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:22:29.766576  216336 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 22:22:29.766615  216336 kubeadm.go:319] OS: Linux
	I1119 22:22:29.766720  216336 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:22:29.766817  216336 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:22:29.766949  216336 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:22:29.767034  216336 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:22:29.767118  216336 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:22:29.767204  216336 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:22:29.767301  216336 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:22:29.767401  216336 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 22:22:29.845500  216336 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:22:29.845633  216336 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:22:29.845758  216336 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	W1119 22:22:26.794071  271072 node_ready.go:57] node "embed-certs-299509" has "Ready":"False" status (will retry)
	W1119 22:22:28.794688  271072 node_ready.go:57] node "embed-certs-299509" has "Ready":"False" status (will retry)
	I1119 22:22:26.025597  276591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.crt.444ee3d6 ...
	I1119 22:22:26.025625  276591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.crt.444ee3d6: {Name:mkd4a17b950761c17a5f1c485097fe70aeb7115f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:26.025780  276591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.key.444ee3d6 ...
	I1119 22:22:26.025793  276591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.key.444ee3d6: {Name:mkce196bc6f1621a4671932273b821505129c4dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:26.025863  276591 certs.go:382] copying /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.crt.444ee3d6 -> /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.crt
	I1119 22:22:26.025977  276591 certs.go:386] copying /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.key.444ee3d6 -> /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.key
	I1119 22:22:26.026038  276591 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/proxy-client.key
	I1119 22:22:26.026053  276591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/proxy-client.crt with IP's: []
	I1119 22:22:26.110242  276591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/proxy-client.crt ...
	I1119 22:22:26.110266  276591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/proxy-client.crt: {Name:mkd034f4ab2e71e3031349036ccdc11118b20207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:26.110421  276591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/proxy-client.key ...
	I1119 22:22:26.110435  276591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/proxy-client.key: {Name:mkc40aef0c1ddcba5bcb699a18bcc20385df9b2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:26.110627  276591 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem (1338 bytes)
	W1119 22:22:26.110662  276591 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821_empty.pem, impossibly tiny 0 bytes
	I1119 22:22:26.110670  276591 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:22:26.110694  276591 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:22:26.110715  276591 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:22:26.110735  276591 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem (1679 bytes)
	I1119 22:22:26.110776  276591 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem (1708 bytes)
	I1119 22:22:26.111384  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:22:26.130746  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:22:26.150827  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:22:26.169590  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:22:26.189660  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 22:22:26.209597  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:22:26.228793  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:22:26.247579  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:22:26.266234  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem --> /usr/share/ca-certificates/128212.pem (1708 bytes)
	I1119 22:22:26.289361  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:22:26.311453  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem --> /usr/share/ca-certificates/12821.pem (1338 bytes)
	I1119 22:22:26.333771  276591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:22:26.348643  276591 ssh_runner.go:195] Run: openssl version
	I1119 22:22:26.355320  276591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:22:26.365196  276591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:22:26.369774  276591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:48 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:22:26.369837  276591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:22:26.407001  276591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:22:26.416898  276591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12821.pem && ln -fs /usr/share/ca-certificates/12821.pem /etc/ssl/certs/12821.pem"
	I1119 22:22:26.426385  276591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12821.pem
	I1119 22:22:26.431341  276591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12821.pem
	I1119 22:22:26.431409  276591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12821.pem
	I1119 22:22:26.468064  276591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12821.pem /etc/ssl/certs/51391683.0"
	I1119 22:22:26.478073  276591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128212.pem && ln -fs /usr/share/ca-certificates/128212.pem /etc/ssl/certs/128212.pem"
	I1119 22:22:26.487541  276591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128212.pem
	I1119 22:22:26.492229  276591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128212.pem
	I1119 22:22:26.492294  276591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128212.pem
	I1119 22:22:26.529743  276591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128212.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:22:26.539856  276591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:22:26.544020  276591 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:22:26.544094  276591 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-409240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:22:26.544240  276591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:22:26.544323  276591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:22:26.575244  276591 cri.go:89] found id: ""
	I1119 22:22:26.575301  276591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:22:26.584632  276591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:22:26.593649  276591 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:22:26.593719  276591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:22:26.603310  276591 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:22:26.603333  276591 kubeadm.go:158] found existing configuration files:
	
	I1119 22:22:26.603381  276591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1119 22:22:26.612040  276591 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:22:26.612101  276591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:22:26.620326  276591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1119 22:22:26.630737  276591 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:22:26.630811  276591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:22:26.639701  276591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1119 22:22:26.648722  276591 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:22:26.648783  276591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:22:26.659210  276591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1119 22:22:26.667820  276591 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:22:26.667894  276591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 22:22:26.676498  276591 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:22:26.743561  276591 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 22:22:26.810970  276591 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:22:29.544819  280330 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-9296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-982287:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.529563131s)
	I1119 22:22:29.544856  280330 kic.go:203] duration metric: took 4.529754174s to extract preloaded images to volume ...
	W1119 22:22:29.544960  280330 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 22:22:29.545008  280330 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 22:22:29.545056  280330 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:22:29.612696  280330 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-982287 --name newest-cni-982287 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-982287 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-982287 --network newest-cni-982287 --ip 192.168.85.2 --volume newest-cni-982287:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:22:29.949075  280330 cli_runner.go:164] Run: docker container inspect newest-cni-982287 --format={{.State.Running}}
	I1119 22:22:29.969100  280330 cli_runner.go:164] Run: docker container inspect newest-cni-982287 --format={{.State.Status}}
	I1119 22:22:29.989712  280330 cli_runner.go:164] Run: docker exec newest-cni-982287 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:22:30.039140  280330 oci.go:144] the created container "newest-cni-982287" has a running status.
	I1119 22:22:30.039169  280330 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-9296/.minikube/machines/newest-cni-982287/id_rsa...
	I1119 22:22:30.133567  280330 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-9296/.minikube/machines/newest-cni-982287/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:22:30.163995  280330 cli_runner.go:164] Run: docker container inspect newest-cni-982287 --format={{.State.Status}}
	I1119 22:22:30.186505  280330 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:22:30.186531  280330 kic_runner.go:114] Args: [docker exec --privileged newest-cni-982287 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:22:30.238099  280330 cli_runner.go:164] Run: docker container inspect newest-cni-982287 --format={{.State.Status}}
	I1119 22:22:30.260123  280330 machine.go:94] provisionDockerMachine start ...
	I1119 22:22:30.260253  280330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:22:30.289537  280330 main.go:143] libmachine: Using SSH client type: native
	I1119 22:22:30.290026  280330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1119 22:22:30.290051  280330 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:22:30.291294  280330 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33964->127.0.0.1:33088: read: connection reset by peer
	I1119 22:22:33.431758  280330 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-982287
	
	I1119 22:22:33.431790  280330 ubuntu.go:182] provisioning hostname "newest-cni-982287"
	I1119 22:22:33.431854  280330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:22:33.453700  280330 main.go:143] libmachine: Using SSH client type: native
	I1119 22:22:33.453982  280330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1119 22:22:33.453999  280330 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-982287 && echo "newest-cni-982287" | sudo tee /etc/hostname
	I1119 22:22:33.617167  280330 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-982287
	
	I1119 22:22:33.617241  280330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:22:33.653151  280330 main.go:143] libmachine: Using SSH client type: native
	I1119 22:22:33.653455  280330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1119 22:22:33.653482  280330 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-982287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-982287/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-982287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:22:33.806363  280330 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:22:33.806400  280330 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9296/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9296/.minikube}
	I1119 22:22:33.806428  280330 ubuntu.go:190] setting up certificates
	I1119 22:22:33.806442  280330 provision.go:84] configureAuth start
	I1119 22:22:33.806525  280330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-982287
	I1119 22:22:33.830568  280330 provision.go:143] copyHostCerts
	I1119 22:22:33.830645  280330 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem, removing ...
	I1119 22:22:33.830657  280330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem
	I1119 22:22:33.830744  280330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem (1078 bytes)
	I1119 22:22:33.830891  280330 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem, removing ...
	I1119 22:22:33.830904  280330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem
	I1119 22:22:33.830955  280330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem (1123 bytes)
	I1119 22:22:33.831091  280330 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem, removing ...
	I1119 22:22:33.831103  280330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem
	I1119 22:22:33.831143  280330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem (1679 bytes)
	I1119 22:22:33.831238  280330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem org=jenkins.newest-cni-982287 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-982287]
	W1119 22:22:31.293984  271072 node_ready.go:57] node "embed-certs-299509" has "Ready":"False" status (will retry)
	W1119 22:22:33.294755  271072 node_ready.go:57] node "embed-certs-299509" has "Ready":"False" status (will retry)
	I1119 22:22:33.793952  271072 node_ready.go:49] node "embed-certs-299509" is "Ready"
	I1119 22:22:33.794000  271072 node_ready.go:38] duration metric: took 11.5033648s for node "embed-certs-299509" to be "Ready" ...
	I1119 22:22:33.794017  271072 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:22:33.794073  271072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:22:33.810709  271072 api_server.go:72] duration metric: took 11.915955391s to wait for apiserver process to appear ...
	I1119 22:22:33.810742  271072 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:22:33.810771  271072 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:22:33.815794  271072 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1119 22:22:33.817268  271072 api_server.go:141] control plane version: v1.34.1
	I1119 22:22:33.817298  271072 api_server.go:131] duration metric: took 6.547094ms to wait for apiserver health ...
	I1119 22:22:33.817307  271072 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:22:33.821284  271072 system_pods.go:59] 8 kube-system pods found
	I1119 22:22:33.821325  271072 system_pods.go:61] "coredns-66bc5c9577-dmd59" [2c555b78-b464-40e7-be35-c2b2286321ab] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:22:33.821335  271072 system_pods.go:61] "etcd-embed-certs-299509" [a216afec-b3af-407e-8bd6-bee515ee12ac] Running
	I1119 22:22:33.821344  271072 system_pods.go:61] "kindnet-st248" [0f41e3e9-9f6f-4f28-a037-373cc6996455] Running
	I1119 22:22:33.821351  271072 system_pods.go:61] "kube-apiserver-embed-certs-299509" [e827ee34-1837-42e1-8e2e-85d36aa7ed0d] Running
	I1119 22:22:33.821358  271072 system_pods.go:61] "kube-controller-manager-embed-certs-299509" [d2d95ea4-394e-408c-96bd-dfd229552da3] Running
	I1119 22:22:33.821362  271072 system_pods.go:61] "kube-proxy-b7gxk" [fc0c848b-ceac-4473-8a9f-42665ee25a5b] Running
	I1119 22:22:33.821365  271072 system_pods.go:61] "kube-scheduler-embed-certs-299509" [da7c1834-2bff-467c-9a29-7c351eea9e13] Running
	I1119 22:22:33.821373  271072 system_pods.go:61] "storage-provisioner" [87ae0335-b9d0-4969-8fd0-febca42399e1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:22:33.821385  271072 system_pods.go:74] duration metric: took 4.070526ms to wait for pod list to return data ...
	I1119 22:22:33.821399  271072 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:22:33.825243  271072 default_sa.go:45] found service account: "default"
	I1119 22:22:33.825273  271072 default_sa.go:55] duration metric: took 3.86707ms for default service account to be created ...
	I1119 22:22:33.825463  271072 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:22:33.828586  271072 system_pods.go:86] 8 kube-system pods found
	I1119 22:22:33.828618  271072 system_pods.go:89] "coredns-66bc5c9577-dmd59" [2c555b78-b464-40e7-be35-c2b2286321ab] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:22:33.828627  271072 system_pods.go:89] "etcd-embed-certs-299509" [a216afec-b3af-407e-8bd6-bee515ee12ac] Running
	I1119 22:22:33.828634  271072 system_pods.go:89] "kindnet-st248" [0f41e3e9-9f6f-4f28-a037-373cc6996455] Running
	I1119 22:22:33.828640  271072 system_pods.go:89] "kube-apiserver-embed-certs-299509" [e827ee34-1837-42e1-8e2e-85d36aa7ed0d] Running
	I1119 22:22:33.828646  271072 system_pods.go:89] "kube-controller-manager-embed-certs-299509" [d2d95ea4-394e-408c-96bd-dfd229552da3] Running
	I1119 22:22:33.828651  271072 system_pods.go:89] "kube-proxy-b7gxk" [fc0c848b-ceac-4473-8a9f-42665ee25a5b] Running
	I1119 22:22:33.828657  271072 system_pods.go:89] "kube-scheduler-embed-certs-299509" [da7c1834-2bff-467c-9a29-7c351eea9e13] Running
	I1119 22:22:33.828665  271072 system_pods.go:89] "storage-provisioner" [87ae0335-b9d0-4969-8fd0-febca42399e1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:22:33.828695  271072 retry.go:31] will retry after 283.925445ms: missing components: kube-dns
	I1119 22:22:34.118106  271072 system_pods.go:86] 8 kube-system pods found
	I1119 22:22:34.118143  271072 system_pods.go:89] "coredns-66bc5c9577-dmd59" [2c555b78-b464-40e7-be35-c2b2286321ab] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:22:34.118150  271072 system_pods.go:89] "etcd-embed-certs-299509" [a216afec-b3af-407e-8bd6-bee515ee12ac] Running
	I1119 22:22:34.118156  271072 system_pods.go:89] "kindnet-st248" [0f41e3e9-9f6f-4f28-a037-373cc6996455] Running
	I1119 22:22:34.118160  271072 system_pods.go:89] "kube-apiserver-embed-certs-299509" [e827ee34-1837-42e1-8e2e-85d36aa7ed0d] Running
	I1119 22:22:34.118164  271072 system_pods.go:89] "kube-controller-manager-embed-certs-299509" [d2d95ea4-394e-408c-96bd-dfd229552da3] Running
	I1119 22:22:34.118167  271072 system_pods.go:89] "kube-proxy-b7gxk" [fc0c848b-ceac-4473-8a9f-42665ee25a5b] Running
	I1119 22:22:34.118170  271072 system_pods.go:89] "kube-scheduler-embed-certs-299509" [da7c1834-2bff-467c-9a29-7c351eea9e13] Running
	I1119 22:22:34.118175  271072 system_pods.go:89] "storage-provisioner" [87ae0335-b9d0-4969-8fd0-febca42399e1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:22:34.118188  271072 retry.go:31] will retry after 317.330113ms: missing components: kube-dns
	I1119 22:22:34.439211  271072 system_pods.go:86] 8 kube-system pods found
	I1119 22:22:34.439242  271072 system_pods.go:89] "coredns-66bc5c9577-dmd59" [2c555b78-b464-40e7-be35-c2b2286321ab] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:22:34.439248  271072 system_pods.go:89] "etcd-embed-certs-299509" [a216afec-b3af-407e-8bd6-bee515ee12ac] Running
	I1119 22:22:34.439260  271072 system_pods.go:89] "kindnet-st248" [0f41e3e9-9f6f-4f28-a037-373cc6996455] Running
	I1119 22:22:34.439264  271072 system_pods.go:89] "kube-apiserver-embed-certs-299509" [e827ee34-1837-42e1-8e2e-85d36aa7ed0d] Running
	I1119 22:22:34.439270  271072 system_pods.go:89] "kube-controller-manager-embed-certs-299509" [d2d95ea4-394e-408c-96bd-dfd229552da3] Running
	I1119 22:22:34.439274  271072 system_pods.go:89] "kube-proxy-b7gxk" [fc0c848b-ceac-4473-8a9f-42665ee25a5b] Running
	I1119 22:22:34.439279  271072 system_pods.go:89] "kube-scheduler-embed-certs-299509" [da7c1834-2bff-467c-9a29-7c351eea9e13] Running
	I1119 22:22:34.439287  271072 system_pods.go:89] "storage-provisioner" [87ae0335-b9d0-4969-8fd0-febca42399e1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:22:34.439308  271072 retry.go:31] will retry after 343.185922ms: missing components: kube-dns
	I1119 22:22:34.787502  271072 system_pods.go:86] 8 kube-system pods found
	I1119 22:22:34.787548  271072 system_pods.go:89] "coredns-66bc5c9577-dmd59" [2c555b78-b464-40e7-be35-c2b2286321ab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:22:34.787559  271072 system_pods.go:89] "etcd-embed-certs-299509" [a216afec-b3af-407e-8bd6-bee515ee12ac] Running
	I1119 22:22:34.787569  271072 system_pods.go:89] "kindnet-st248" [0f41e3e9-9f6f-4f28-a037-373cc6996455] Running
	I1119 22:22:34.787575  271072 system_pods.go:89] "kube-apiserver-embed-certs-299509" [e827ee34-1837-42e1-8e2e-85d36aa7ed0d] Running
	I1119 22:22:34.787582  271072 system_pods.go:89] "kube-controller-manager-embed-certs-299509" [d2d95ea4-394e-408c-96bd-dfd229552da3] Running
	I1119 22:22:34.787586  271072 system_pods.go:89] "kube-proxy-b7gxk" [fc0c848b-ceac-4473-8a9f-42665ee25a5b] Running
	I1119 22:22:34.787591  271072 system_pods.go:89] "kube-scheduler-embed-certs-299509" [da7c1834-2bff-467c-9a29-7c351eea9e13] Running
	I1119 22:22:34.787596  271072 system_pods.go:89] "storage-provisioner" [87ae0335-b9d0-4969-8fd0-febca42399e1] Running
	I1119 22:22:34.787606  271072 system_pods.go:126] duration metric: took 962.135619ms to wait for k8s-apps to be running ...
	I1119 22:22:34.787616  271072 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:22:34.787667  271072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:22:34.806841  271072 system_svc.go:56] duration metric: took 19.214772ms WaitForService to wait for kubelet
	I1119 22:22:34.806915  271072 kubeadm.go:587] duration metric: took 12.912189637s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:22:34.806938  271072 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:22:34.810623  271072 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 22:22:34.810654  271072 node_conditions.go:123] node cpu capacity is 8
	I1119 22:22:34.810671  271072 node_conditions.go:105] duration metric: took 3.728429ms to run NodePressure ...
	I1119 22:22:34.810685  271072 start.go:242] waiting for startup goroutines ...
	I1119 22:22:34.810694  271072 start.go:247] waiting for cluster config update ...
	I1119 22:22:34.810707  271072 start.go:256] writing updated cluster config ...
	I1119 22:22:34.811047  271072 ssh_runner.go:195] Run: rm -f paused
	I1119 22:22:34.816231  271072 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:22:34.820920  271072 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dmd59" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:34.826183  271072 pod_ready.go:94] pod "coredns-66bc5c9577-dmd59" is "Ready"
	I1119 22:22:34.826210  271072 pod_ready.go:86] duration metric: took 5.257551ms for pod "coredns-66bc5c9577-dmd59" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:34.828987  271072 pod_ready.go:83] waiting for pod "etcd-embed-certs-299509" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:34.833480  271072 pod_ready.go:94] pod "etcd-embed-certs-299509" is "Ready"
	I1119 22:22:34.833506  271072 pod_ready.go:86] duration metric: took 4.492269ms for pod "etcd-embed-certs-299509" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:34.836026  271072 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-299509" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:34.840861  271072 pod_ready.go:94] pod "kube-apiserver-embed-certs-299509" is "Ready"
	I1119 22:22:34.840946  271072 pod_ready.go:86] duration metric: took 4.894896ms for pod "kube-apiserver-embed-certs-299509" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:34.843228  271072 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-299509" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:35.221427  271072 pod_ready.go:94] pod "kube-controller-manager-embed-certs-299509" is "Ready"
	I1119 22:22:35.221457  271072 pod_ready.go:86] duration metric: took 378.200798ms for pod "kube-controller-manager-embed-certs-299509" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:35.421075  271072 pod_ready.go:83] waiting for pod "kube-proxy-b7gxk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:35.820517  271072 pod_ready.go:94] pod "kube-proxy-b7gxk" is "Ready"
	I1119 22:22:35.820542  271072 pod_ready.go:86] duration metric: took 399.44003ms for pod "kube-proxy-b7gxk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:36.022309  271072 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-299509" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:36.421735  271072 pod_ready.go:94] pod "kube-scheduler-embed-certs-299509" is "Ready"
	I1119 22:22:36.421766  271072 pod_ready.go:86] duration metric: took 399.426239ms for pod "kube-scheduler-embed-certs-299509" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:36.421782  271072 pod_ready.go:40] duration metric: took 1.605467692s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:22:36.482197  271072 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 22:22:36.483810  271072 out.go:179] * Done! kubectl is now configured to use "embed-certs-299509" cluster and "default" namespace by default
	I1119 22:22:34.707484  280330 provision.go:177] copyRemoteCerts
	I1119 22:22:34.707566  280330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:22:34.707606  280330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:22:34.725423  280330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/newest-cni-982287/id_rsa Username:docker}
	I1119 22:22:34.826470  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:22:34.856115  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 22:22:34.879845  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:22:34.905572  280330 provision.go:87] duration metric: took 1.099102161s to configureAuth
	I1119 22:22:34.905604  280330 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:22:34.906171  280330 config.go:182] Loaded profile config "newest-cni-982287": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:22:34.906210  280330 machine.go:97] duration metric: took 4.64606146s to provisionDockerMachine
	I1119 22:22:34.906220  280330 client.go:176] duration metric: took 10.511385903s to LocalClient.Create
	I1119 22:22:34.906250  280330 start.go:167] duration metric: took 10.511463988s to libmachine.API.Create "newest-cni-982287"
	I1119 22:22:34.906263  280330 start.go:293] postStartSetup for "newest-cni-982287" (driver="docker")
	I1119 22:22:34.906275  280330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:22:34.906335  280330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:22:34.906379  280330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:22:34.931452  280330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/newest-cni-982287/id_rsa Username:docker}
	I1119 22:22:35.040926  280330 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:22:35.045946  280330 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:22:35.045989  280330 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:22:35.046003  280330 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9296/.minikube/addons for local assets ...
	I1119 22:22:35.046060  280330 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9296/.minikube/files for local assets ...
	I1119 22:22:35.046153  280330 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem -> 128212.pem in /etc/ssl/certs
	I1119 22:22:35.046274  280330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:22:35.056558  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem --> /etc/ssl/certs/128212.pem (1708 bytes)
	I1119 22:22:35.084815  280330 start.go:296] duration metric: took 178.536573ms for postStartSetup
	I1119 22:22:35.085278  280330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-982287
	I1119 22:22:35.110307  280330 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/config.json ...
	I1119 22:22:35.110611  280330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:22:35.110657  280330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:22:35.135042  280330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/newest-cni-982287/id_rsa Username:docker}
	I1119 22:22:35.237701  280330 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:22:35.243727  280330 start.go:128] duration metric: took 10.851573045s to createHost
	I1119 22:22:35.243757  280330 start.go:83] releasing machines lock for "newest-cni-982287", held for 10.851724024s
	I1119 22:22:35.243839  280330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-982287
	I1119 22:22:35.267661  280330 ssh_runner.go:195] Run: cat /version.json
	I1119 22:22:35.267708  280330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:22:35.267778  280330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:22:35.268212  280330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:22:35.292111  280330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/newest-cni-982287/id_rsa Username:docker}
	I1119 22:22:35.292342  280330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/newest-cni-982287/id_rsa Username:docker}
	I1119 22:22:35.467290  280330 ssh_runner.go:195] Run: systemctl --version
	I1119 22:22:35.474736  280330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:22:35.480229  280330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:22:35.480307  280330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:22:35.506774  280330 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 22:22:35.506798  280330 start.go:496] detecting cgroup driver to use...
	I1119 22:22:35.506827  280330 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 22:22:35.506865  280330 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 22:22:35.521633  280330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 22:22:35.534995  280330 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:22:35.535054  280330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:22:35.555622  280330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:22:35.573509  280330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:22:35.675129  280330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:22:35.771490  280330 docker.go:234] disabling docker service ...
	I1119 22:22:35.771557  280330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:22:35.791577  280330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:22:35.805345  280330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:22:35.893395  280330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:22:35.989514  280330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:22:36.006433  280330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:22:36.028122  280330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 22:22:36.042021  280330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 22:22:36.052163  280330 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1119 22:22:36.052246  280330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1119 22:22:36.062904  280330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:22:36.075204  280330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 22:22:36.087574  280330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:22:36.101783  280330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:22:36.110858  280330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 22:22:36.121164  280330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 22:22:36.132874  280330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 22:22:36.144559  280330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:22:36.154816  280330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:22:36.164165  280330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:22:36.271146  280330 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1119 22:22:36.418497  280330 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 22:22:36.418560  280330 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 22:22:36.424387  280330 start.go:564] Will wait 60s for crictl version
	I1119 22:22:36.424446  280330 ssh_runner.go:195] Run: which crictl
	I1119 22:22:36.429706  280330 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:22:36.466769  280330 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 22:22:36.466921  280330 ssh_runner.go:195] Run: containerd --version
	I1119 22:22:36.495723  280330 ssh_runner.go:195] Run: containerd --version
	I1119 22:22:36.527630  280330 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1119 22:22:36.529205  280330 cli_runner.go:164] Run: docker network inspect newest-cni-982287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:22:36.552951  280330 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 22:22:36.558500  280330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:22:36.575956  280330 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1119 22:22:36.991156  276591 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:22:36.991226  276591 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:22:36.991344  276591 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:22:36.991405  276591 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 22:22:36.991453  276591 kubeadm.go:319] OS: Linux
	I1119 22:22:36.991524  276591 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:22:36.991602  276591 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:22:36.991674  276591 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:22:36.991786  276591 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:22:36.991927  276591 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:22:36.992022  276591 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:22:36.992091  276591 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:22:36.992170  276591 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 22:22:36.992270  276591 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:22:36.992410  276591 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:22:36.992549  276591 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:22:36.992628  276591 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:22:36.994336  276591 out.go:252]   - Generating certificates and keys ...
	I1119 22:22:36.994448  276591 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:22:36.994539  276591 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:22:36.994630  276591 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:22:36.994708  276591 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:22:36.994792  276591 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:22:36.994862  276591 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:22:36.995098  276591 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:22:36.995346  276591 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-409240 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 22:22:36.995425  276591 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:22:36.995644  276591 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-409240 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 22:22:36.995739  276591 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:22:36.995835  276591 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:22:36.995944  276591 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:22:36.996019  276591 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:22:36.996083  276591 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:22:36.996167  276591 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:22:36.996240  276591 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:22:36.996352  276591 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:22:36.996422  276591 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:22:36.996535  276591 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:22:36.996630  276591 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:22:36.998058  276591 out.go:252]   - Booting up control plane ...
	I1119 22:22:36.998164  276591 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:22:36.998358  276591 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:22:36.998492  276591 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:22:36.998683  276591 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:22:36.998845  276591 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:22:36.999001  276591 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:22:36.999127  276591 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:22:36.999233  276591 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:22:36.999428  276591 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:22:36.999581  276591 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:22:36.999680  276591 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.059271ms
	I1119 22:22:36.999921  276591 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:22:37.000056  276591 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I1119 22:22:37.000195  276591 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:22:37.000314  276591 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:22:37.000421  276591 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.132785513s
	I1119 22:22:37.000532  276591 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.806845653s
	I1119 22:22:37.000619  276591 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002186224s
	I1119 22:22:37.000748  276591 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:22:37.000918  276591 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:22:37.000991  276591 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:22:37.001247  276591 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-409240 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:22:37.001330  276591 kubeadm.go:319] [bootstrap-token] Using token: jt6zlp.9o8ngv3uv5w6cuhp
	I1119 22:22:37.003023  276591 out.go:252]   - Configuring RBAC rules ...
	I1119 22:22:37.003118  276591 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:22:37.003186  276591 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:22:37.003306  276591 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:22:37.003409  276591 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:22:37.003506  276591 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:22:37.003574  276591 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:22:37.003667  276591 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:22:37.003702  276591 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:22:37.003739  276591 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:22:37.003742  276591 kubeadm.go:319] 
	I1119 22:22:37.003790  276591 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:22:37.003793  276591 kubeadm.go:319] 
	I1119 22:22:37.003856  276591 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:22:37.003859  276591 kubeadm.go:319] 
	I1119 22:22:37.003892  276591 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:22:37.003964  276591 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:22:37.004023  276591 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:22:37.004033  276591 kubeadm.go:319] 
	I1119 22:22:37.004094  276591 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:22:37.004099  276591 kubeadm.go:319] 
	I1119 22:22:37.004147  276591 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:22:37.004151  276591 kubeadm.go:319] 
	I1119 22:22:37.004214  276591 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:22:37.004319  276591 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:22:37.004407  276591 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:22:37.004414  276591 kubeadm.go:319] 
	I1119 22:22:37.004523  276591 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:22:37.004621  276591 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:22:37.004631  276591 kubeadm.go:319] 
	I1119 22:22:37.004735  276591 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token jt6zlp.9o8ngv3uv5w6cuhp \
	I1119 22:22:37.004929  276591 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6123875ff628fb9eedbd72f2253477865aa197083b84a1d60cb6c00de308bc63 \
	I1119 22:22:37.004991  276591 kubeadm.go:319] 	--control-plane 
	I1119 22:22:37.005007  276591 kubeadm.go:319] 
	I1119 22:22:37.005153  276591 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:22:37.005176  276591 kubeadm.go:319] 
	I1119 22:22:37.005312  276591 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token jt6zlp.9o8ngv3uv5w6cuhp \
	I1119 22:22:37.005533  276591 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6123875ff628fb9eedbd72f2253477865aa197083b84a1d60cb6c00de308bc63 
	I1119 22:22:37.005580  276591 cni.go:84] Creating CNI manager for ""
	I1119 22:22:37.005601  276591 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:22:37.007317  276591 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:22:38.064085  216336 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:22:36.577259  280330 kubeadm.go:884] updating cluster {Name:newest-cni-982287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-982287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:22:36.577421  280330 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:22:36.577484  280330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:22:36.612845  280330 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:22:36.612874  280330 containerd.go:534] Images already preloaded, skipping extraction
	I1119 22:22:36.612983  280330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:22:36.648084  280330 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:22:36.648113  280330 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:22:36.648122  280330 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1119 22:22:36.648329  280330 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-982287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-982287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
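The ExecStart flags above end up in a systemd drop-in (the 10-kubeadm.conf scp'd a few lines below). A minimal sketch for inspecting the effective kubelet unit on the node, assuming a systemd host you can shell into:

    # Show the kubelet unit together with its drop-ins, then its runtime state.
    systemctl cat kubelet
    systemctl status kubelet --no-pager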
	I1119 22:22:36.648455  280330 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:22:36.695583  280330 cni.go:84] Creating CNI manager for ""
	I1119 22:22:36.695610  280330 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:22:36.695628  280330 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 22:22:36.695670  280330 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-982287 NodeName:newest-cni-982287 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:22:36.695944  280330 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-982287"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
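The generated config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later handed to kubeadm init. A minimal sketch for validating such a file without touching host state, assuming a matching kubeadm binary on the PATH (the --dry-run invocation is not from the log):

    # Render what kubeadm would do for this config without applying any changes.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run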
	
	I1119 22:22:36.696033  280330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:22:36.706266  280330 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:22:36.706329  280330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:22:36.715401  280330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1119 22:22:36.730738  280330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:22:36.749872  280330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1119 22:22:36.765531  280330 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:22:36.770080  280330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:22:36.782743  280330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:22:36.889055  280330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:22:36.923087  280330 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287 for IP: 192.168.85.2
	I1119 22:22:36.923112  280330 certs.go:195] generating shared ca certs ...
	I1119 22:22:36.923134  280330 certs.go:227] acquiring lock for ca certs: {Name:mkfe62d1b64cfdbe1c6a3d1f38aa0edc5b9ec419 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:36.923314  280330 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key
	I1119 22:22:36.923364  280330 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key
	I1119 22:22:36.923373  280330 certs.go:257] generating profile certs ...
	I1119 22:22:36.923440  280330 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/client.key
	I1119 22:22:36.923461  280330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/client.crt with IP's: []
	I1119 22:22:37.181324  280330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/client.crt ...
	I1119 22:22:37.181356  280330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/client.crt: {Name:mkb01b8326784e66b7df5ab019ef6110c6c012ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:37.181574  280330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/client.key ...
	I1119 22:22:37.181594  280330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/client.key: {Name:mk51e15b78fbe125c718c897366ec099f68b0cc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:37.181715  280330 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.key.9887c082
	I1119 22:22:37.181737  280330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.crt.9887c082 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1119 22:22:37.329761  280330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.crt.9887c082 ...
	I1119 22:22:37.329790  280330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.crt.9887c082: {Name:mk8b10d40b3a22fe4e2dc15032ab661d54c098d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:37.330014  280330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.key.9887c082 ...
	I1119 22:22:37.330044  280330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.key.9887c082: {Name:mk3a61ee08cec0f79ada62e8fb29583cd21e7bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:37.330156  280330 certs.go:382] copying /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.crt.9887c082 -> /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.crt
	I1119 22:22:37.330250  280330 certs.go:386] copying /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.key.9887c082 -> /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.key
	I1119 22:22:37.330322  280330 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.key
	I1119 22:22:37.330338  280330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.crt with IP's: []
	I1119 22:22:37.771878  280330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.crt ...
	I1119 22:22:37.771917  280330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.crt: {Name:mk58761e0e6ee7737a83048777838b9aec8854a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:37.801265  280330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.key ...
	I1119 22:22:37.801303  280330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.key: {Name:mk552820dd03268dd56a26bab7595fafc18517aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:37.801622  280330 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem (1338 bytes)
	W1119 22:22:37.801678  280330 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821_empty.pem, impossibly tiny 0 bytes
	I1119 22:22:37.801689  280330 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:22:37.801717  280330 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:22:37.801748  280330 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:22:37.801779  280330 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem (1679 bytes)
	I1119 22:22:37.801829  280330 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem (1708 bytes)
	I1119 22:22:37.802674  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:22:37.875734  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:22:37.894413  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:22:37.926996  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:22:37.947970  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 22:22:37.967508  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:22:37.988163  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:22:38.007952  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:22:38.026511  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem --> /usr/share/ca-certificates/12821.pem (1338 bytes)
	I1119 22:22:38.063287  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem --> /usr/share/ca-certificates/128212.pem (1708 bytes)
	I1119 22:22:38.082212  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:22:38.102014  280330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:22:38.115085  280330 ssh_runner.go:195] Run: openssl version
	I1119 22:22:38.121839  280330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12821.pem && ln -fs /usr/share/ca-certificates/12821.pem /etc/ssl/certs/12821.pem"
	I1119 22:22:38.131648  280330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12821.pem
	I1119 22:22:38.135595  280330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12821.pem
	I1119 22:22:38.135658  280330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12821.pem
	I1119 22:22:38.172211  280330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12821.pem /etc/ssl/certs/51391683.0"
	I1119 22:22:38.182532  280330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128212.pem && ln -fs /usr/share/ca-certificates/128212.pem /etc/ssl/certs/128212.pem"
	I1119 22:22:38.191695  280330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128212.pem
	I1119 22:22:38.196453  280330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128212.pem
	I1119 22:22:38.196511  280330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128212.pem
	I1119 22:22:38.234306  280330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128212.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:22:38.244590  280330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:22:38.254043  280330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:22:38.258079  280330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:48 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:22:38.258138  280330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:22:38.295672  280330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
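The openssl/ln pairs above follow the standard c_rehash convention: each CA certificate is linked under /etc/ssl/certs by its OpenSSL subject hash with a .0 suffix (e.g. b5213941.0). A minimal sketch of the same step for one certificate, with the path taken from the log:

    # Compute the subject hash and create the matching trust-store symlink.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"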
	I1119 22:22:38.304737  280330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:22:38.308765  280330 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:22:38.308829  280330 kubeadm.go:401] StartCluster: {Name:newest-cni-982287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-982287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:22:38.308934  280330 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:22:38.309011  280330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:22:38.338594  280330 cri.go:89] found id: ""
	I1119 22:22:38.338656  280330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:22:38.347771  280330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:22:38.356084  280330 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:22:38.356152  280330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:22:38.364447  280330 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:22:38.364471  280330 kubeadm.go:158] found existing configuration files:
	
	I1119 22:22:38.364519  280330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 22:22:38.372662  280330 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:22:38.372725  280330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:22:38.380501  280330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 22:22:38.388307  280330 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:22:38.388354  280330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:22:38.396282  280330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 22:22:38.403906  280330 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:22:38.403965  280330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:22:38.411846  280330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 22:22:38.419497  280330 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:22:38.419562  280330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 22:22:38.427419  280330 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:22:38.471999  280330 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:22:38.472102  280330 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:22:38.506349  280330 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:22:38.506471  280330 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 22:22:38.506528  280330 kubeadm.go:319] OS: Linux
	I1119 22:22:38.506608  280330 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:22:38.506687  280330 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:22:38.506757  280330 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:22:38.506827  280330 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:22:38.506912  280330 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:22:38.506978  280330 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:22:38.507044  280330 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:22:38.507104  280330 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 22:22:38.578369  280330 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:22:38.578545  280330 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:22:38.578669  280330 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:22:38.583992  280330 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:22:38.586974  280330 out.go:252]   - Generating certificates and keys ...
	I1119 22:22:38.587060  280330 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:22:38.587142  280330 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:22:38.772077  280330 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:22:39.010675  280330 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:22:38.065634  216336 out.go:252]   - Generating certificates and keys ...
	I1119 22:22:38.065746  216336 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:22:38.065840  216336 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:22:38.065978  216336 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1119 22:22:38.066092  216336 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1119 22:22:38.066189  216336 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1119 22:22:38.066274  216336 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1119 22:22:38.066365  216336 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1119 22:22:38.066473  216336 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1119 22:22:38.066586  216336 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1119 22:22:38.066708  216336 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1119 22:22:38.066765  216336 kubeadm.go:319] [certs] Using the existing "sa" key
	I1119 22:22:38.066841  216336 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:22:38.249871  216336 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:22:38.510034  216336 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:22:38.953480  216336 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:22:39.188274  216336 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:22:39.320580  216336 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:22:39.321267  216336 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:22:39.324043  216336 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:22:39.326001  216336 out.go:252]   - Booting up control plane ...
	I1119 22:22:39.326149  216336 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:22:39.326288  216336 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:22:39.327083  216336 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:22:39.351951  216336 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:22:39.352138  216336 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:22:39.360710  216336 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:22:39.360981  216336 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:22:39.361053  216336 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:22:39.495517  216336 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:22:39.495708  216336 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:22:37.008922  276591 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:22:37.014610  276591 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:22:37.014639  276591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:22:37.031868  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:22:37.329119  276591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:22:37.329194  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:37.329194  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-409240 minikube.k8s.io/updated_at=2025_11_19T22_22_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=default-k8s-diff-port-409240 minikube.k8s.io/primary=true
	I1119 22:22:37.342712  276591 ops.go:34] apiserver oom_adj: -16
	I1119 22:22:37.420487  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:37.921546  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:38.421061  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:38.921271  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:39.421135  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:39.920573  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:40.421572  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:40.921543  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:39.291147  280330 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:22:39.803989  280330 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:22:39.857608  280330 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:22:39.857805  280330 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-982287] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:22:40.046677  280330 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:22:40.047344  280330 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-982287] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:22:40.324316  280330 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:22:40.485707  280330 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:22:40.758234  280330 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:22:40.758548  280330 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:22:40.887155  280330 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:22:40.966155  280330 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:22:41.277055  280330 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:22:41.449006  280330 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:22:41.880741  280330 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:22:41.881629  280330 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:22:41.887692  280330 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:22:41.421113  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:41.921101  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:42.420622  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:42.522253  276591 kubeadm.go:1114] duration metric: took 5.193121012s to wait for elevateKubeSystemPrivileges
	I1119 22:22:42.522299  276591 kubeadm.go:403] duration metric: took 15.978207866s to StartCluster
	I1119 22:22:42.522329  276591 settings.go:142] acquiring lock: {Name:mk3c795849984e82ee99295088dd85252bd75f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:42.522413  276591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:22:42.524002  276591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/kubeconfig: {Name:mk5b9093863cb8ca8629eea9fd861356875781d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:42.524286  276591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:22:42.524308  276591 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:22:42.524370  276591 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:22:42.524471  276591 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-409240"
	I1119 22:22:42.524490  276591 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-409240"
	I1119 22:22:42.524493  276591 config.go:182] Loaded profile config "default-k8s-diff-port-409240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:22:42.524509  276591 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-409240"
	I1119 22:22:42.524557  276591 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-409240"
	I1119 22:22:42.524581  276591 host.go:66] Checking if "default-k8s-diff-port-409240" exists ...
	I1119 22:22:42.524977  276591 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409240 --format={{.State.Status}}
	I1119 22:22:42.525109  276591 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409240 --format={{.State.Status}}
	I1119 22:22:42.526483  276591 out.go:179] * Verifying Kubernetes components...
	I1119 22:22:42.528866  276591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:22:42.598272  276591 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-409240"
	I1119 22:22:42.598516  276591 host.go:66] Checking if "default-k8s-diff-port-409240" exists ...
	I1119 22:22:42.599725  276591 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409240 --format={{.State.Status}}
	I1119 22:22:42.603372  276591 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:22:42.604788  276591 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:22:42.604968  276591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:22:42.605059  276591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409240
	I1119 22:22:42.635283  276591 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:22:42.635309  276591 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:22:42.636085  276591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409240
	I1119 22:22:42.641698  276591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/default-k8s-diff-port-409240/id_rsa Username:docker}
	I1119 22:22:42.672358  276591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/default-k8s-diff-port-409240/id_rsa Username:docker}
	I1119 22:22:42.680798  276591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
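The sed pipeline above splices a hosts block for host.minikube.internal into the CoreDNS Corefile (and a log directive before errors) before replacing the ConfigMap. Going by that expression, the injected fragment placed just ahead of the existing forward . /etc/resolv.conf line would look roughly like this (IP from the log; indentation assumed):

    hosts {
       192.168.103.1 host.minikube.internal
       fallthrough
    }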
	I1119 22:22:42.762621  276591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:22:42.807288  276591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:22:42.828033  276591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:22:42.981686  276591 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1119 22:22:42.983391  276591 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-409240" to be "Ready" ...
	I1119 22:22:43.217993  276591 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 22:22:41.891465  280330 out.go:252]   - Booting up control plane ...
	I1119 22:22:41.891604  280330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:22:41.891699  280330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:22:41.891777  280330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:22:41.910166  280330 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:22:41.910315  280330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:22:41.917973  280330 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:22:41.918251  280330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:22:41.918334  280330 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:22:42.066811  280330 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:22:42.066999  280330 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:22:43.067875  280330 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001203567s
	I1119 22:22:43.071192  280330 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:22:43.071317  280330 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1119 22:22:43.071435  280330 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:22:43.071538  280330 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:22:40.497124  216336 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001690602s
	I1119 22:22:40.500502  216336 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:22:40.500633  216336 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1119 22:22:40.500767  216336 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:22:40.500906  216336 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:22:41.881178  216336 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.38034623s
	I1119 22:22:42.959692  216336 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.458959453s
	I1119 22:22:45.003285  216336 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501853641s
	I1119 22:22:45.016734  216336 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:22:45.033443  216336 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:22:45.043685  216336 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:22:45.044006  216336 kubeadm.go:319] [mark-control-plane] Marking the node kubernetes-upgrade-133839 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:22:45.053396  216336 kubeadm.go:319] [bootstrap-token] Using token: piifbg.8xlm8l44mj6waatg
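The control-plane-check URLs logged above can also be probed by hand from the control-plane node. A minimal sketch using curl against the same endpoints kubeadm reports (assumptions: a shell on the node is available, -k skips TLS verification, and depending on anonymous-auth/RBAC settings the apiserver probe may answer 401/403 rather than ok):

	curl -k https://127.0.0.1:10257/healthz    # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez      # kube-scheduler
	curl -k https://192.168.76.2:8443/livez    # kube-apiserver (address taken from the log above)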
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	d8328ecf166b5       56cc512116c8f       6 seconds ago       Running             busybox                   0                   eb1640aaa1d2a       busybox                                      default
	c978f4fb9e859       52546a367cc9e       11 seconds ago      Running             coredns                   0                   bacdff11be525       coredns-66bc5c9577-dmd59                     kube-system
	ef38515030366       6e38f40d628db       11 seconds ago      Running             storage-provisioner       0                   764b37d3ae033       storage-provisioner                          kube-system
	f9ca6afe443ef       409467f978b4a       23 seconds ago      Running             kindnet-cni               0                   32897e54ab6f4       kindnet-st248                                kube-system
	e99f92f9441eb       fc25172553d79       23 seconds ago      Running             kube-proxy                0                   cee5ddf5f99fa       kube-proxy-b7gxk                             kube-system
	979e5f09853d6       7dd6aaa1717ab       33 seconds ago      Running             kube-scheduler            0                   e1f4dc800232c       kube-scheduler-embed-certs-299509            kube-system
	00b0e185d9e85       c80c8dbafe7dd       33 seconds ago      Running             kube-controller-manager   0                   6d621fcd51faa       kube-controller-manager-embed-certs-299509   kube-system
	cc73d1161063d       c3994bc696102       33 seconds ago      Running             kube-apiserver            0                   bc6d35a584260       kube-apiserver-embed-certs-299509            kube-system
	32d631be779d9       5f1f5298c888d       33 seconds ago      Running             etcd                      0                   7979510379921       etcd-embed-certs-299509                      kube-system
	
	
	==> containerd <==
	Nov 19 22:22:33 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:33.973974130Z" level=info msg="connecting to shim ef38515030366beb6fccedae1c7aa65324258714b98b367dda9dc76ee5b5d50c" address="unix:///run/containerd/s/eaf2199e68c707984e371883acb9b067310c430ccd07b64261836ee8335f62d1" protocol=ttrpc version=3
	Nov 19 22:22:33 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:33.997465041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dmd59,Uid:2c555b78-b464-40e7-be35-c2b2286321ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"bacdff11be5256a18280e514962c346368fdba45d149bfa15204be99dd6e5321\""
	Nov 19 22:22:34 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:34.016381792Z" level=info msg="CreateContainer within sandbox \"bacdff11be5256a18280e514962c346368fdba45d149bfa15204be99dd6e5321\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 19 22:22:34 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:34.026048816Z" level=info msg="Container c978f4fb9e8596decd6645bac1416185c389b884e1d3bcb98086559c4fb2ea82: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:22:34 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:34.032584126Z" level=info msg="StartContainer for \"ef38515030366beb6fccedae1c7aa65324258714b98b367dda9dc76ee5b5d50c\" returns successfully"
	Nov 19 22:22:34 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:34.035922910Z" level=info msg="CreateContainer within sandbox \"bacdff11be5256a18280e514962c346368fdba45d149bfa15204be99dd6e5321\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c978f4fb9e8596decd6645bac1416185c389b884e1d3bcb98086559c4fb2ea82\""
	Nov 19 22:22:34 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:34.036530808Z" level=info msg="StartContainer for \"c978f4fb9e8596decd6645bac1416185c389b884e1d3bcb98086559c4fb2ea82\""
	Nov 19 22:22:34 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:34.037558106Z" level=info msg="connecting to shim c978f4fb9e8596decd6645bac1416185c389b884e1d3bcb98086559c4fb2ea82" address="unix:///run/containerd/s/76edba81499667cae3998dc46fe4ad9fce3bbf71d4176b1c0b966787c64dd424" protocol=ttrpc version=3
	Nov 19 22:22:34 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:34.095375936Z" level=info msg="StartContainer for \"c978f4fb9e8596decd6645bac1416185c389b884e1d3bcb98086559c4fb2ea82\" returns successfully"
	Nov 19 22:22:36 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:36.990781626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:bb1fff85-0367-4004-a462-e99ccd3ceeb3,Namespace:default,Attempt:0,}"
	Nov 19 22:22:37 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:37.035902567Z" level=info msg="connecting to shim eb1640aaa1d2a531e4230cd157f8801d6829e698b902af50555135d2c4d7bc57" address="unix:///run/containerd/s/81defd07493ca4a9062a6edc54a757892e618deedef99502bebefb1276a5ff57" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 22:22:37 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:37.126605399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:bb1fff85-0367-4004-a462-e99ccd3ceeb3,Namespace:default,Attempt:0,} returns sandbox id \"eb1640aaa1d2a531e4230cd157f8801d6829e698b902af50555135d2c4d7bc57\""
	Nov 19 22:22:37 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:37.129653934Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.584163458Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.584917463Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396641"
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.586325608Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.588213148Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.588582882Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.458858165s"
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.588640335Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.592843046Z" level=info msg="CreateContainer within sandbox \"eb1640aaa1d2a531e4230cd157f8801d6829e698b902af50555135d2c4d7bc57\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.600595370Z" level=info msg="Container d8328ecf166b5f2c6edf429c7a3314bfd5420b5254c0bd7f3f9640a603a02bac: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.607485080Z" level=info msg="CreateContainer within sandbox \"eb1640aaa1d2a531e4230cd157f8801d6829e698b902af50555135d2c4d7bc57\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"d8328ecf166b5f2c6edf429c7a3314bfd5420b5254c0bd7f3f9640a603a02bac\""
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.608187133Z" level=info msg="StartContainer for \"d8328ecf166b5f2c6edf429c7a3314bfd5420b5254c0bd7f3f9640a603a02bac\""
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.609033179Z" level=info msg="connecting to shim d8328ecf166b5f2c6edf429c7a3314bfd5420b5254c0bd7f3f9640a603a02bac" address="unix:///run/containerd/s/81defd07493ca4a9062a6edc54a757892e618deedef99502bebefb1276a5ff57" protocol=ttrpc version=3
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.661255060Z" level=info msg="StartContainer for \"d8328ecf166b5f2c6edf429c7a3314bfd5420b5254c0bd7f3f9640a603a02bac\" returns successfully"
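The container status table further up lists the busybox container whose creation and start are logged here (id d8328ecf166b5). If the node shell is still reachable, a minimal sketch for inspecting it with crictl (a prefix of the container id is sufficient):

	crictl ps --name busybox
	crictl logs d8328ecf166b5
	crictl inspect d8328ecf166b5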
	
	
	==> coredns [c978f4fb9e8596decd6645bac1416185c389b884e1d3bcb98086559c4fb2ea82] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52135 - 57985 "HINFO IN 5202741956818390714.7152830120362697649. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02797357s
	
	
	==> describe nodes <==
	Name:               embed-certs-299509
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-299509
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=embed-certs-299509
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_22_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:22:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-299509
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:22:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:22:33 +0000   Wed, 19 Nov 2025 22:22:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:22:33 +0000   Wed, 19 Nov 2025 22:22:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:22:33 +0000   Wed, 19 Nov 2025 22:22:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:22:33 +0000   Wed, 19 Nov 2025 22:22:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-299509
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                89d06ccb-f6da-4042-90eb-6aa22f98b648
	  Boot ID:                    f21fb8e8-9754-4dc5-a8d9-ce41ba5f6057
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-dmd59                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-embed-certs-299509                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-st248                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-embed-certs-299509             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-embed-certs-299509    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-b7gxk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-embed-certs-299509             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node embed-certs-299509 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node embed-certs-299509 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x7 over 35s)  kubelet          Node embed-certs-299509 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  35s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  30s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node embed-certs-299509 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node embed-certs-299509 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node embed-certs-299509 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node embed-certs-299509 event: Registered Node embed-certs-299509 in Controller
	  Normal  NodeReady                13s                kubelet          Node embed-certs-299509 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov19 21:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001836] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.089012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.424964] i8042: Warning: Keylock active
	[  +0.011946] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499038] block sda: the capability attribute has been deprecated.
	[  +0.090446] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026259] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.862736] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [32d631be779d9fe68adb18531d9e8cc4ca0f6f57219fa3343bca45c04f81b0f6] <==
	{"level":"warn","ts":"2025-11-19T22:22:21.270385Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"227.706956ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" limit:1 ","response":"range_response_count:1 size:218"}
	{"level":"info","ts":"2025-11-19T22:22:21.270455Z","caller":"traceutil/trace.go:172","msg":"trace[680321887] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:1; response_revision:329; }","duration":"227.803365ms","start":"2025-11-19T22:22:21.042637Z","end":"2025-11-19T22:22:21.270441Z","steps":["trace[680321887] 'agreement among raft nodes before linearized reading'  (duration: 101.99388ms)","trace[680321887] 'range keys from in-memory index tree'  (duration: 125.627346ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:22:21.270394Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.65563ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766285565613889 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/embed-certs-299509.1879889df1c3cc39\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/embed-certs-299509.1879889df1c3cc39\" value_size:621 lease:6571766285565613236 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-19T22:22:21.270643Z","caller":"traceutil/trace.go:172","msg":"trace[1101998133] transaction","detail":"{read_only:false; response_revision:330; number_of_response:1; }","duration":"228.533731ms","start":"2025-11-19T22:22:21.042094Z","end":"2025-11-19T22:22:21.270628Z","steps":["trace[1101998133] 'process raft request'  (duration: 102.591256ms)","trace[1101998133] 'compare'  (duration: 125.534963ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T22:22:21.271605Z","caller":"traceutil/trace.go:172","msg":"trace[902524464] linearizableReadLoop","detail":"{readStateIndex:340; appliedIndex:340; }","duration":"126.985068ms","start":"2025-11-19T22:22:21.144605Z","end":"2025-11-19T22:22:21.271591Z","steps":["trace[902524464] 'read index received'  (duration: 126.962851ms)","trace[902524464] 'applied index is now lower than readState.Index'  (duration: 20.876µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:22:21.271774Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"228.630867ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-19T22:22:21.271818Z","caller":"traceutil/trace.go:172","msg":"trace[595777696] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:331; }","duration":"228.685668ms","start":"2025-11-19T22:22:21.043122Z","end":"2025-11-19T22:22:21.271808Z","steps":["trace[595777696] 'agreement among raft nodes before linearized reading'  (duration: 228.556693ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:22:21.271839Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.680356ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-cidrs-controller\" limit:1 ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-11-19T22:22:21.271909Z","caller":"traceutil/trace.go:172","msg":"trace[503953989] transaction","detail":"{read_only:false; response_revision:331; number_of_response:1; }","duration":"222.810074ms","start":"2025-11-19T22:22:21.049087Z","end":"2025-11-19T22:22:21.271897Z","steps":["trace[503953989] 'process raft request'  (duration: 222.504014ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:22:21.271777Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.150751ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-controller\" limit:1 ","response":"range_response_count:1 size:193"}
	{"level":"info","ts":"2025-11-19T22:22:21.271943Z","caller":"traceutil/trace.go:172","msg":"trace[1943891992] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-cidrs-controller; range_end:; response_count:1; response_revision:331; }","duration":"134.80824ms","start":"2025-11-19T22:22:21.137118Z","end":"2025-11-19T22:22:21.271926Z","steps":["trace[1943891992] 'agreement among raft nodes before linearized reading'  (duration: 134.486718ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:22:21.271963Z","caller":"traceutil/trace.go:172","msg":"trace[369689261] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-controller; range_end:; response_count:1; response_revision:331; }","duration":"185.33093ms","start":"2025-11-19T22:22:21.086608Z","end":"2025-11-19T22:22:21.271939Z","steps":["trace[369689261] 'agreement among raft nodes before linearized reading'  (duration: 185.04183ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:22:21.272177Z","caller":"traceutil/trace.go:172","msg":"trace[1590633831] transaction","detail":"{read_only:false; response_revision:333; number_of_response:1; }","duration":"222.289369ms","start":"2025-11-19T22:22:21.049877Z","end":"2025-11-19T22:22:21.272166Z","steps":["trace[1590633831] 'process raft request'  (duration: 222.100441ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:22:21.272178Z","caller":"traceutil/trace.go:172","msg":"trace[423127262] transaction","detail":"{read_only:false; response_revision:334; number_of_response:1; }","duration":"184.016417ms","start":"2025-11-19T22:22:21.088148Z","end":"2025-11-19T22:22:21.272165Z","steps":["trace[423127262] 'process raft request'  (duration: 183.91276ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:22:21.272502Z","caller":"traceutil/trace.go:172","msg":"trace[1537717388] transaction","detail":"{read_only:false; response_revision:332; number_of_response:1; }","duration":"223.136566ms","start":"2025-11-19T22:22:21.049345Z","end":"2025-11-19T22:22:21.272481Z","steps":["trace[1537717388] 'process raft request'  (duration: 222.337069ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:22:21.379297Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.536498ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/edit\" limit:1 ","response":"range_response_count:1 size:2313"}
	{"level":"info","ts":"2025-11-19T22:22:21.379372Z","caller":"traceutil/trace.go:172","msg":"trace[1561772185] range","detail":"{range_begin:/registry/clusterroles/edit; range_end:; response_count:1; response_revision:334; }","duration":"100.624296ms","start":"2025-11-19T22:22:21.278732Z","end":"2025-11-19T22:22:21.379356Z","steps":["trace[1561772185] 'agreement among raft nodes before linearized reading'  (duration: 94.263188ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:22:21.379526Z","caller":"traceutil/trace.go:172","msg":"trace[1771335165] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"100.790863ms","start":"2025-11-19T22:22:21.278716Z","end":"2025-11-19T22:22:21.379507Z","steps":["trace[1771335165] 'process raft request'  (duration: 94.244352ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:22:21.662980Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"175.756192ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" limit:1 ","response":"range_response_count:1 size:218"}
	{"level":"info","ts":"2025-11-19T22:22:21.663157Z","caller":"traceutil/trace.go:172","msg":"trace[1223092418] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:1; response_revision:336; }","duration":"175.948113ms","start":"2025-11-19T22:22:21.487187Z","end":"2025-11-19T22:22:21.663135Z","steps":["trace[1223092418] 'agreement among raft nodes before linearized reading'  (duration: 50.541951ms)","trace[1223092418] 'range keys from in-memory index tree'  (duration: 125.090203ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:22:21.663359Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.320003ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766285565613905 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/admin\" mod_revision:335 > success:<request_put:<key:\"/registry/clusterroles/admin\" value_size:3706 >> failure:<request_range:<key:\"/registry/clusterroles/admin\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-19T22:22:21.663430Z","caller":"traceutil/trace.go:172","msg":"trace[1411416401] linearizableReadLoop","detail":"{readStateIndex:347; appliedIndex:346; }","duration":"125.718571ms","start":"2025-11-19T22:22:21.537701Z","end":"2025-11-19T22:22:21.663420Z","steps":["trace[1411416401] 'read index received'  (duration: 13.54µs)","trace[1411416401] 'applied index is now lower than readState.Index'  (duration: 125.704264ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T22:22:21.663440Z","caller":"traceutil/trace.go:172","msg":"trace[1436096593] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"220.193377ms","start":"2025-11-19T22:22:21.443234Z","end":"2025-11-19T22:22:21.663427Z","steps":["trace[1436096593] 'process raft request'  (duration: 94.549625ms)","trace[1436096593] 'compare'  (duration: 125.198123ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:22:21.663916Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.271012ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-19T22:22:21.663966Z","caller":"traceutil/trace.go:172","msg":"trace[1341852426] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:337; }","duration":"126.349526ms","start":"2025-11-19T22:22:21.537605Z","end":"2025-11-19T22:22:21.663954Z","steps":["trace[1341852426] 'agreement among raft nodes before linearized reading'  (duration: 125.85502ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:22:46 up  1:05,  0 user,  load average: 4.78, 3.71, 2.38
	Linux embed-certs-299509 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f9ca6afe443eff6c4850214548f3d18190831351081a79b8efcd17d4127265ff] <==
	I1119 22:22:23.242556       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:22:23.242837       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1119 22:22:23.243088       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:22:23.243116       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:22:23.243140       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:22:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:22:23.446415       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:22:23.446818       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:22:23.447460       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:22:23.447656       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:22:23.847690       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:22:23.940641       1 metrics.go:72] Registering metrics
	I1119 22:22:23.941152       1 controller.go:711] "Syncing nftables rules"
	I1119 22:22:33.447997       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1119 22:22:33.448068       1 main.go:301] handling current node
	I1119 22:22:43.446324       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1119 22:22:43.446356       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cc73d1161063d4dbf9949d49375d5baccff72a3ad2ebde0910a448aad00cec6a] <==
	I1119 22:22:13.923253       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 22:22:13.924730       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:22:13.925565       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:22:13.931751       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:22:13.931773       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1119 22:22:13.933127       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:22:14.110006       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:22:14.825149       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:22:14.829564       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:22:14.829585       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:22:15.508150       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:22:15.553481       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:22:15.630445       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:22:15.641307       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1119 22:22:15.642614       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:22:15.648620       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:22:15.840777       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:22:16.707831       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:22:16.720509       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:22:16.734769       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:22:21.670622       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:22:21.746750       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:22:21.753846       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:22:21.945379       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1119 22:22:44.787345       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:49832: use of closed network connection
	
	
	==> kube-controller-manager [00b0e185d9e859d3c703686e910cdc28f1a3c21c4ad84c64e9872f583393a56e] <==
	I1119 22:22:21.083714       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 22:22:21.083795       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 22:22:21.083713       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 22:22:21.083855       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 22:22:21.083865       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 22:22:21.083893       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 22:22:21.089451       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1119 22:22:21.089644       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 22:22:21.089972       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:22:21.090102       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 22:22:21.090669       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 22:22:21.090703       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 22:22:21.090780       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:22:21.090790       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:22:21.090800       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:22:21.091857       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 22:22:21.091903       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 22:22:21.092294       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:22:21.093354       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 22:22:21.093600       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 22:22:21.096701       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 22:22:21.105422       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:22:21.117079       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:22:21.274253       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-299509" podCIDRs=["10.244.0.0/24"]
	I1119 22:22:36.043208       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e99f92f9441eba50aa579862f55732c0c715ab1185ac9b748a64f6597c21cd3e] <==
	I1119 22:22:22.733093       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:22:22.799926       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:22:22.901074       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:22:22.901121       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1119 22:22:22.901279       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:22:22.926594       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:22:22.926655       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:22:22.932355       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:22:22.932783       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:22:22.932817       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:22:22.934682       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:22:22.934875       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:22:22.934910       1 config.go:200] "Starting service config controller"
	I1119 22:22:22.934945       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:22:22.934964       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:22:22.934974       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:22:22.935006       1 config.go:309] "Starting node config controller"
	I1119 22:22:22.935013       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:22:23.035756       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 22:22:23.035802       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:22:23.035774       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:22:23.035800       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [979e5f09853d673db16f4f5458dd6ef974350045a743532dfe09873fb6a44243] <==
	E1119 22:22:13.870348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:22:13.870485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:22:13.870488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:22:13.870577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:22:13.870831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:22:13.870856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:22:13.870871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:22:13.871013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:22:13.871294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:22:14.693069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:22:14.704384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:22:14.705402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:22:14.750865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:22:14.840749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:22:14.847092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:22:14.895857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 22:22:14.972872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:22:14.989326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:22:15.054782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:22:15.062386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:22:15.078092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:22:15.152679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:22:15.254176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:22:15.271651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1119 22:22:16.967246       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:22:17 embed-certs-299509 kubelet[1460]: I1119 22:22:17.655664    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-299509" podStartSLOduration=1.655639949 podStartE2EDuration="1.655639949s" podCreationTimestamp="2025-11-19 22:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:17.642862964 +0000 UTC m=+1.171882551" watchObservedRunningTime="2025-11-19 22:22:17.655639949 +0000 UTC m=+1.184659537"
	Nov 19 22:22:17 embed-certs-299509 kubelet[1460]: I1119 22:22:17.672191    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-299509" podStartSLOduration=1.6721652169999999 podStartE2EDuration="1.672165217s" podCreationTimestamp="2025-11-19 22:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:17.655838038 +0000 UTC m=+1.184857611" watchObservedRunningTime="2025-11-19 22:22:17.672165217 +0000 UTC m=+1.201184803"
	Nov 19 22:22:17 embed-certs-299509 kubelet[1460]: I1119 22:22:17.692583    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-299509" podStartSLOduration=1.6924579199999998 podStartE2EDuration="1.69245792s" podCreationTimestamp="2025-11-19 22:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:17.672742323 +0000 UTC m=+1.201761910" watchObservedRunningTime="2025-11-19 22:22:17.69245792 +0000 UTC m=+1.221477506"
	Nov 19 22:22:17 embed-certs-299509 kubelet[1460]: I1119 22:22:17.695109    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-299509" podStartSLOduration=1.695085562 podStartE2EDuration="1.695085562s" podCreationTimestamp="2025-11-19 22:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:17.693709128 +0000 UTC m=+1.222728722" watchObservedRunningTime="2025-11-19 22:22:17.695085562 +0000 UTC m=+1.224105149"
	Nov 19 22:22:21 embed-certs-299509 kubelet[1460]: I1119 22:22:21.276541    1460 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 22:22:21 embed-certs-299509 kubelet[1460]: I1119 22:22:21.277398    1460 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 22:22:22 embed-certs-299509 kubelet[1460]: I1119 22:22:22.106241    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fc0c848b-ceac-4473-8a9f-42665ee25a5b-kube-proxy\") pod \"kube-proxy-b7gxk\" (UID: \"fc0c848b-ceac-4473-8a9f-42665ee25a5b\") " pod="kube-system/kube-proxy-b7gxk"
	Nov 19 22:22:22 embed-certs-299509 kubelet[1460]: I1119 22:22:22.106303    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc0c848b-ceac-4473-8a9f-42665ee25a5b-xtables-lock\") pod \"kube-proxy-b7gxk\" (UID: \"fc0c848b-ceac-4473-8a9f-42665ee25a5b\") " pod="kube-system/kube-proxy-b7gxk"
	Nov 19 22:22:22 embed-certs-299509 kubelet[1460]: I1119 22:22:22.106326    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc0c848b-ceac-4473-8a9f-42665ee25a5b-lib-modules\") pod \"kube-proxy-b7gxk\" (UID: \"fc0c848b-ceac-4473-8a9f-42665ee25a5b\") " pod="kube-system/kube-proxy-b7gxk"
	Nov 19 22:22:22 embed-certs-299509 kubelet[1460]: I1119 22:22:22.106357    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t5mg\" (UniqueName: \"kubernetes.io/projected/fc0c848b-ceac-4473-8a9f-42665ee25a5b-kube-api-access-5t5mg\") pod \"kube-proxy-b7gxk\" (UID: \"fc0c848b-ceac-4473-8a9f-42665ee25a5b\") " pod="kube-system/kube-proxy-b7gxk"
	Nov 19 22:22:22 embed-certs-299509 kubelet[1460]: I1119 22:22:22.106383    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f41e3e9-9f6f-4f28-a037-373cc6996455-xtables-lock\") pod \"kindnet-st248\" (UID: \"0f41e3e9-9f6f-4f28-a037-373cc6996455\") " pod="kube-system/kindnet-st248"
	Nov 19 22:22:22 embed-certs-299509 kubelet[1460]: I1119 22:22:22.106408    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0f41e3e9-9f6f-4f28-a037-373cc6996455-cni-cfg\") pod \"kindnet-st248\" (UID: \"0f41e3e9-9f6f-4f28-a037-373cc6996455\") " pod="kube-system/kindnet-st248"
	Nov 19 22:22:22 embed-certs-299509 kubelet[1460]: I1119 22:22:22.106431    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f41e3e9-9f6f-4f28-a037-373cc6996455-lib-modules\") pod \"kindnet-st248\" (UID: \"0f41e3e9-9f6f-4f28-a037-373cc6996455\") " pod="kube-system/kindnet-st248"
	Nov 19 22:22:22 embed-certs-299509 kubelet[1460]: I1119 22:22:22.106456    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg5nw\" (UniqueName: \"kubernetes.io/projected/0f41e3e9-9f6f-4f28-a037-373cc6996455-kube-api-access-lg5nw\") pod \"kindnet-st248\" (UID: \"0f41e3e9-9f6f-4f28-a037-373cc6996455\") " pod="kube-system/kindnet-st248"
	Nov 19 22:22:23 embed-certs-299509 kubelet[1460]: I1119 22:22:23.634905    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b7gxk" podStartSLOduration=2.6348579279999997 podStartE2EDuration="2.634857928s" podCreationTimestamp="2025-11-19 22:22:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:23.634459761 +0000 UTC m=+7.163479350" watchObservedRunningTime="2025-11-19 22:22:23.634857928 +0000 UTC m=+7.163877515"
	Nov 19 22:22:23 embed-certs-299509 kubelet[1460]: I1119 22:22:23.648626    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-st248" podStartSLOduration=2.648600591 podStartE2EDuration="2.648600591s" podCreationTimestamp="2025-11-19 22:22:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:23.648350989 +0000 UTC m=+7.177370575" watchObservedRunningTime="2025-11-19 22:22:23.648600591 +0000 UTC m=+7.177620178"
	Nov 19 22:22:33 embed-certs-299509 kubelet[1460]: I1119 22:22:33.504398    1460 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 22:22:33 embed-certs-299509 kubelet[1460]: I1119 22:22:33.588726    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8272\" (UniqueName: \"kubernetes.io/projected/2c555b78-b464-40e7-be35-c2b2286321ab-kube-api-access-b8272\") pod \"coredns-66bc5c9577-dmd59\" (UID: \"2c555b78-b464-40e7-be35-c2b2286321ab\") " pod="kube-system/coredns-66bc5c9577-dmd59"
	Nov 19 22:22:33 embed-certs-299509 kubelet[1460]: I1119 22:22:33.589040    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/87ae0335-b9d0-4969-8fd0-febca42399e1-tmp\") pod \"storage-provisioner\" (UID: \"87ae0335-b9d0-4969-8fd0-febca42399e1\") " pod="kube-system/storage-provisioner"
	Nov 19 22:22:33 embed-certs-299509 kubelet[1460]: I1119 22:22:33.589078    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c555b78-b464-40e7-be35-c2b2286321ab-config-volume\") pod \"coredns-66bc5c9577-dmd59\" (UID: \"2c555b78-b464-40e7-be35-c2b2286321ab\") " pod="kube-system/coredns-66bc5c9577-dmd59"
	Nov 19 22:22:33 embed-certs-299509 kubelet[1460]: I1119 22:22:33.589105    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgfsd\" (UniqueName: \"kubernetes.io/projected/87ae0335-b9d0-4969-8fd0-febca42399e1-kube-api-access-wgfsd\") pod \"storage-provisioner\" (UID: \"87ae0335-b9d0-4969-8fd0-febca42399e1\") " pod="kube-system/storage-provisioner"
	Nov 19 22:22:34 embed-certs-299509 kubelet[1460]: I1119 22:22:34.735707    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dmd59" podStartSLOduration=12.735683653 podStartE2EDuration="12.735683653s" podCreationTimestamp="2025-11-19 22:22:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:34.735414493 +0000 UTC m=+18.264434080" watchObservedRunningTime="2025-11-19 22:22:34.735683653 +0000 UTC m=+18.264703241"
	Nov 19 22:22:34 embed-certs-299509 kubelet[1460]: I1119 22:22:34.792568    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.792544284 podStartE2EDuration="12.792544284s" podCreationTimestamp="2025-11-19 22:22:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:34.777928592 +0000 UTC m=+18.306948393" watchObservedRunningTime="2025-11-19 22:22:34.792544284 +0000 UTC m=+18.321563871"
	Nov 19 22:22:36 embed-certs-299509 kubelet[1460]: I1119 22:22:36.717940    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgxp6\" (UniqueName: \"kubernetes.io/projected/bb1fff85-0367-4004-a462-e99ccd3ceeb3-kube-api-access-dgxp6\") pod \"busybox\" (UID: \"bb1fff85-0367-4004-a462-e99ccd3ceeb3\") " pod="default/busybox"
	Nov 19 22:22:39 embed-certs-299509 kubelet[1460]: I1119 22:22:39.675004    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.213846338 podStartE2EDuration="3.674985904s" podCreationTimestamp="2025-11-19 22:22:36 +0000 UTC" firstStartedPulling="2025-11-19 22:22:37.128554557 +0000 UTC m=+20.657574128" lastFinishedPulling="2025-11-19 22:22:39.589694129 +0000 UTC m=+23.118713694" observedRunningTime="2025-11-19 22:22:39.67448623 +0000 UTC m=+23.203505818" watchObservedRunningTime="2025-11-19 22:22:39.674985904 +0000 UTC m=+23.204005490"
	
	
	==> storage-provisioner [ef38515030366beb6fccedae1c7aa65324258714b98b367dda9dc76ee5b5d50c] <==
	I1119 22:22:34.043729       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:22:34.054355       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:22:34.054397       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:22:34.056832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:34.062389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:22:34.062655       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:22:34.062785       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3efdb2d9-5b85-4442-8675-f3018b942da4", APIVersion:"v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-299509_3b4150a4-9140-4907-b679-5423aa10fdf1 became leader
	I1119 22:22:34.062900       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-299509_3b4150a4-9140-4907-b679-5423aa10fdf1!
	W1119 22:22:34.065632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:34.068997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:22:34.163624       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-299509_3b4150a4-9140-4907-b679-5423aa10fdf1!
	W1119 22:22:36.073007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:36.078611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:38.082224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:38.086304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:40.089677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:40.095726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:42.100265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:42.105517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:44.110538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:44.115648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:46.119456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:46.124346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-299509 -n embed-certs-299509
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-299509 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-299509
helpers_test.go:243: (dbg) docker inspect embed-certs-299509:

-- stdout --
	[
	    {
	        "Id": "8c914f6ed883b40972e81b2ba6077f6dafca54021f10a22c1638b53636970d5b",
	        "Created": "2025-11-19T22:22:01.188638615Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272073,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:22:01.237383145Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/8c914f6ed883b40972e81b2ba6077f6dafca54021f10a22c1638b53636970d5b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8c914f6ed883b40972e81b2ba6077f6dafca54021f10a22c1638b53636970d5b/hostname",
	        "HostsPath": "/var/lib/docker/containers/8c914f6ed883b40972e81b2ba6077f6dafca54021f10a22c1638b53636970d5b/hosts",
	        "LogPath": "/var/lib/docker/containers/8c914f6ed883b40972e81b2ba6077f6dafca54021f10a22c1638b53636970d5b/8c914f6ed883b40972e81b2ba6077f6dafca54021f10a22c1638b53636970d5b-json.log",
	        "Name": "/embed-certs-299509",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-299509:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-299509",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8c914f6ed883b40972e81b2ba6077f6dafca54021f10a22c1638b53636970d5b",
	                "LowerDir": "/var/lib/docker/overlay2/090a077d4867dc9f58314a1bc1d4b6ba4cb458dfc507ac1cde0f19a4105d8462-init/diff:/var/lib/docker/overlay2/b09480e350abbb2f4f48b19448dc8e9ddd0de679fdb8cd59ebc5b758a29b344e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/090a077d4867dc9f58314a1bc1d4b6ba4cb458dfc507ac1cde0f19a4105d8462/merged",
	                "UpperDir": "/var/lib/docker/overlay2/090a077d4867dc9f58314a1bc1d4b6ba4cb458dfc507ac1cde0f19a4105d8462/diff",
	                "WorkDir": "/var/lib/docker/overlay2/090a077d4867dc9f58314a1bc1d4b6ba4cb458dfc507ac1cde0f19a4105d8462/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-299509",
	                "Source": "/var/lib/docker/volumes/embed-certs-299509/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-299509",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-299509",
	                "name.minikube.sigs.k8s.io": "embed-certs-299509",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fef08dfcdb6bcaff8dd6f7c67530a8173d7a0d0114a4d82b68265e8ae516e37b",
	            "SandboxKey": "/var/run/docker/netns/fef08dfcdb6b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-299509": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0aa8a9c74cb17c34fdff9b7cd85f2551ab3dbab0447c24a33d9c9e57813d5094",
	                    "EndpointID": "b36f417bccba7fce7a10e7235d1d2e9314070a1b269362f9c55518b1934c8df6",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "42:ab:ee:91:57:85",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-299509",
	                        "8c914f6ed883"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-299509 -n embed-certs-299509
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-299509 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-299509 logs -n 25: (1.219197563s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ start   │ -p no-preload-638439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:19 UTC │ 19 Nov 25 22:20 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-975700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:20 UTC │ 19 Nov 25 22:20 UTC │
	│ stop    │ -p old-k8s-version-975700 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:20 UTC │ 19 Nov 25 22:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-975700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:20 UTC │ 19 Nov 25 22:20 UTC │
	│ start   │ -p old-k8s-version-975700 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:20 UTC │ 19 Nov 25 22:21 UTC │
	│ addons  │ enable metrics-server -p no-preload-638439 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:20 UTC │ 19 Nov 25 22:20 UTC │
	│ stop    │ -p no-preload-638439 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:20 UTC │ 19 Nov 25 22:21 UTC │
	│ addons  │ enable dashboard -p no-preload-638439 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:21 UTC │
	│ start   │ -p no-preload-638439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:21 UTC │
	│ image   │ old-k8s-version-975700 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:21 UTC │
	│ pause   │ -p old-k8s-version-975700 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:21 UTC │
	│ unpause │ -p old-k8s-version-975700 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:21 UTC │
	│ delete  │ -p old-k8s-version-975700                                                                                                                                                                                                                           │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:21 UTC │
	│ delete  │ -p old-k8s-version-975700                                                                                                                                                                                                                           │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:21 UTC │
	│ start   │ -p embed-certs-299509 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-299509           │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:22 UTC │
	│ image   │ no-preload-638439 image list --format=json                                                                                                                                                                                                          │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ pause   │ -p no-preload-638439 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ unpause │ -p no-preload-638439 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ delete  │ -p no-preload-638439                                                                                                                                                                                                                                │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ start   │ -p cert-expiration-207460 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-207460       │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ delete  │ -p no-preload-638439                                                                                                                                                                                                                                │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ delete  │ -p disable-driver-mounts-837642                                                                                                                                                                                                                     │ disable-driver-mounts-837642 │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ start   │ -p default-k8s-diff-port-409240 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-409240 │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │                     │
	│ delete  │ -p cert-expiration-207460                                                                                                                                                                                                                           │ cert-expiration-207460       │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ start   │ -p newest-cni-982287 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-982287            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:22:24
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:22:24.161753  280330 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:22:24.162111  280330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:22:24.162126  280330 out.go:374] Setting ErrFile to fd 2...
	I1119 22:22:24.162134  280330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:22:24.162460  280330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
	I1119 22:22:24.164759  280330 out.go:368] Setting JSON to false
	I1119 22:22:24.166474  280330 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3884,"bootTime":1763587060,"procs":335,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:22:24.166610  280330 start.go:143] virtualization: kvm guest
	I1119 22:22:24.168838  280330 out.go:179] * [newest-cni-982287] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:22:24.172664  280330 notify.go:221] Checking for updates...
	I1119 22:22:24.172695  280330 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:22:24.174491  280330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:22:24.175742  280330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:22:24.177038  280330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9296/.minikube
	I1119 22:22:24.178419  280330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:22:24.179831  280330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:22:24.181589  280330 config.go:182] Loaded profile config "default-k8s-diff-port-409240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:22:24.181772  280330 config.go:182] Loaded profile config "embed-certs-299509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:22:24.181940  280330 config.go:182] Loaded profile config "kubernetes-upgrade-133839": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:22:24.182095  280330 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:22:24.210716  280330 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:22:24.210847  280330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:22:24.285322  280330 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-19 22:22:24.267236293 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:22:24.285434  280330 docker.go:319] overlay module found
	I1119 22:22:24.288716  280330 out.go:179] * Using the docker driver based on user configuration
	I1119 22:22:24.290115  280330 start.go:309] selected driver: docker
	I1119 22:22:24.290136  280330 start.go:930] validating driver "docker" against <nil>
	I1119 22:22:24.290156  280330 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:22:24.290864  280330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:22:24.356561  280330 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-19 22:22:24.346163396 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:22:24.356761  280330 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1119 22:22:24.356795  280330 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1119 22:22:24.357205  280330 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 22:22:24.359530  280330 out.go:179] * Using Docker driver with root privileges
	I1119 22:22:24.360851  280330 cni.go:84] Creating CNI manager for ""
	I1119 22:22:24.360927  280330 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:22:24.360960  280330 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:22:24.361032  280330 start.go:353] cluster config:
	{Name:newest-cni-982287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-982287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:22:24.363356  280330 out.go:179] * Starting "newest-cni-982287" primary control-plane node in "newest-cni-982287" cluster
	I1119 22:22:24.364859  280330 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 22:22:24.366384  280330 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:22:24.367705  280330 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:22:24.367771  280330 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1119 22:22:24.367787  280330 cache.go:65] Caching tarball of preloaded images
	I1119 22:22:24.367824  280330 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:22:24.367892  280330 preload.go:238] Found /home/jenkins/minikube-integration/21918-9296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1119 22:22:24.367908  280330 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1119 22:22:24.368018  280330 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/config.json ...
	I1119 22:22:24.368040  280330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/config.json: {Name:mkb02b749fc99339e72978c4ec7a212ddec516c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:24.391802  280330 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:22:24.391822  280330 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:22:24.391838  280330 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:22:24.391873  280330 start.go:360] acquireMachinesLock for newest-cni-982287: {Name:mke27c2b85aec9405ad5413bcb0f1bda4c4bbb7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:22:24.392018  280330 start.go:364] duration metric: took 98.197µs to acquireMachinesLock for "newest-cni-982287"
	I1119 22:22:24.392049  280330 start.go:93] Provisioning new machine with config: &{Name:newest-cni-982287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-982287 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:22:24.392139  280330 start.go:125] createHost starting for "" (driver="docker")
	I1119 22:22:21.688654  276591 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-9296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-409240:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.845223655s)
	I1119 22:22:21.688691  276591 kic.go:203] duration metric: took 4.845376641s to extract preloaded images to volume ...
	W1119 22:22:21.688779  276591 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 22:22:21.688827  276591 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 22:22:21.688871  276591 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:22:21.756090  276591 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-409240 --name default-k8s-diff-port-409240 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-409240 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-409240 --network default-k8s-diff-port-409240 --ip 192.168.103.2 --volume default-k8s-diff-port-409240:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:22:22.206834  276591 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409240 --format={{.State.Running}}
	I1119 22:22:22.243232  276591 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409240 --format={{.State.Status}}
	I1119 22:22:22.272068  276591 cli_runner.go:164] Run: docker exec default-k8s-diff-port-409240 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:22:22.343028  276591 oci.go:144] the created container "default-k8s-diff-port-409240" has a running status.
	I1119 22:22:22.343065  276591 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-9296/.minikube/machines/default-k8s-diff-port-409240/id_rsa...
	I1119 22:22:22.554014  276591 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-9296/.minikube/machines/default-k8s-diff-port-409240/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:22:22.588222  276591 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409240 --format={{.State.Status}}
	I1119 22:22:22.618774  276591 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:22:22.618798  276591 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-409240 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:22:22.678316  276591 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409240 --format={{.State.Status}}
	I1119 22:22:22.704960  276591 machine.go:94] provisionDockerMachine start ...
	I1119 22:22:22.705112  276591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409240
	I1119 22:22:22.728714  276591 main.go:143] libmachine: Using SSH client type: native
	I1119 22:22:22.729061  276591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1119 22:22:22.729078  276591 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:22:22.868504  276591 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-409240
	
	I1119 22:22:22.868533  276591 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-409240"
	I1119 22:22:22.868583  276591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409240
	I1119 22:22:22.891991  276591 main.go:143] libmachine: Using SSH client type: native
	I1119 22:22:22.892307  276591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1119 22:22:22.892335  276591 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-409240 && echo "default-k8s-diff-port-409240" | sudo tee /etc/hostname
	I1119 22:22:23.041410  276591 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-409240
	
	I1119 22:22:23.041575  276591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409240
	I1119 22:22:23.064046  276591 main.go:143] libmachine: Using SSH client type: native
	I1119 22:22:23.064278  276591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1119 22:22:23.064306  276591 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-409240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-409240/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-409240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:22:23.198838  276591 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:22:23.198866  276591 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9296/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9296/.minikube}
	I1119 22:22:23.198908  276591 ubuntu.go:190] setting up certificates
	I1119 22:22:23.198921  276591 provision.go:84] configureAuth start
	I1119 22:22:23.198971  276591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-409240
	I1119 22:22:23.217760  276591 provision.go:143] copyHostCerts
	I1119 22:22:23.217831  276591 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem, removing ...
	I1119 22:22:23.217844  276591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem
	I1119 22:22:23.217943  276591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem (1078 bytes)
	I1119 22:22:23.218061  276591 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem, removing ...
	I1119 22:22:23.218073  276591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem
	I1119 22:22:23.218119  276591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem (1123 bytes)
	I1119 22:22:23.218199  276591 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem, removing ...
	I1119 22:22:23.218210  276591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem
	I1119 22:22:23.218242  276591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem (1679 bytes)
	I1119 22:22:23.218316  276591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-409240 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-409240 localhost minikube]
	I1119 22:22:23.274597  276591 provision.go:177] copyRemoteCerts
	I1119 22:22:23.274661  276591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:22:23.274717  276591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409240
	I1119 22:22:23.295581  276591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/default-k8s-diff-port-409240/id_rsa Username:docker}
	I1119 22:22:23.391822  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:22:23.412051  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 22:22:23.430680  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:22:23.462047  276591 provision.go:87] duration metric: took 263.112413ms to configureAuth
	I1119 22:22:23.462082  276591 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:22:23.462267  276591 config.go:182] Loaded profile config "default-k8s-diff-port-409240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:22:23.462282  276591 machine.go:97] duration metric: took 757.266023ms to provisionDockerMachine
	I1119 22:22:23.462291  276591 client.go:176] duration metric: took 7.278239396s to LocalClient.Create
	I1119 22:22:23.462316  276591 start.go:167] duration metric: took 7.278303414s to libmachine.API.Create "default-k8s-diff-port-409240"
	I1119 22:22:23.462329  276591 start.go:293] postStartSetup for "default-k8s-diff-port-409240" (driver="docker")
	I1119 22:22:23.462347  276591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:22:23.462408  276591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:22:23.462454  276591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409240
	I1119 22:22:23.484075  276591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/default-k8s-diff-port-409240/id_rsa Username:docker}
	I1119 22:22:23.588498  276591 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:22:23.592579  276591 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:22:23.592603  276591 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:22:23.592613  276591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9296/.minikube/addons for local assets ...
	I1119 22:22:23.592656  276591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9296/.minikube/files for local assets ...
	I1119 22:22:23.592742  276591 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem -> 128212.pem in /etc/ssl/certs
	I1119 22:22:23.592831  276591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:22:23.601335  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem --> /etc/ssl/certs/128212.pem (1708 bytes)
	I1119 22:22:23.627689  276591 start.go:296] duration metric: took 165.338567ms for postStartSetup
	I1119 22:22:23.628117  276591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-409240
	I1119 22:22:23.653523  276591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/config.json ...
	I1119 22:22:23.654543  276591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:22:23.654587  276591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409240
	I1119 22:22:23.674215  276591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/default-k8s-diff-port-409240/id_rsa Username:docker}
	I1119 22:22:23.766409  276591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:22:23.771916  276591 start.go:128] duration metric: took 7.591286902s to createHost
	I1119 22:22:23.771940  276591 start.go:83] releasing machines lock for "default-k8s-diff-port-409240", held for 7.591415686s
	I1119 22:22:23.772001  276591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-409240
	I1119 22:22:23.794108  276591 ssh_runner.go:195] Run: cat /version.json
	I1119 22:22:23.794164  276591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409240
	I1119 22:22:23.794183  276591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:22:23.794255  276591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409240
	I1119 22:22:23.830841  276591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/default-k8s-diff-port-409240/id_rsa Username:docker}
	I1119 22:22:23.835377  276591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/default-k8s-diff-port-409240/id_rsa Username:docker}
	I1119 22:22:23.927000  276591 ssh_runner.go:195] Run: systemctl --version
	I1119 22:22:24.012697  276591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:22:24.018691  276591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:22:24.018756  276591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:22:24.055860  276591 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 22:22:24.055902  276591 start.go:496] detecting cgroup driver to use...
	I1119 22:22:24.055996  276591 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 22:22:24.056062  276591 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 22:22:24.073778  276591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 22:22:24.087562  276591 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:22:24.087619  276591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:22:24.106056  276591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:22:24.126564  276591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:22:24.227013  276591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:22:24.341035  276591 docker.go:234] disabling docker service ...
	I1119 22:22:24.341101  276591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:22:24.363772  276591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:22:24.378070  276591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:22:24.487114  276591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:22:24.583821  276591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:22:24.597532  276591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:22:24.614001  276591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 22:22:24.627405  276591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 22:22:24.636866  276591 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1119 22:22:24.636942  276591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1119 22:22:24.646877  276591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:22:24.657015  276591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 22:22:24.666697  276591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:22:24.678202  276591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:22:24.687477  276591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 22:22:24.698728  276591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 22:22:24.709124  276591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 22:22:24.719391  276591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:22:24.728022  276591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:22:24.736167  276591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:22:24.841027  276591 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1119 22:22:24.953379  276591 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 22:22:24.953453  276591 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 22:22:24.957773  276591 start.go:564] Will wait 60s for crictl version
	I1119 22:22:24.957840  276591 ssh_runner.go:195] Run: which crictl
	I1119 22:22:24.961692  276591 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:22:24.991056  276591 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 22:22:24.991121  276591 ssh_runner.go:195] Run: containerd --version
	I1119 22:22:25.015332  276591 ssh_runner.go:195] Run: containerd --version
	I1119 22:22:25.040622  276591 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
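[editor's note] The block above configures containerd through a series of in-place sed edits (sandbox image, SystemdCgroup, runc runtime version, CNI conf_dir, unprivileged ports) and then restarts the service before minikube moves on to "Preparing Kubernetes v1.34.1 on containerd 2.1.5". As a rough sketch of the same idea (an illustration, not minikube's containerd.go; it must be run as root and only handles the SystemdCgroup line), the central edit is equivalent to:

// containerd_cgroup_sketch.go — illustrative only.
// Equivalent in spirit to:
//   sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}

	// Preserve the leading indentation, rewrite only the value.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))

	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "write:", err)
		os.Exit(1)
	}
	// As in the log, a `systemctl restart containerd` is still needed
	// for the change to take effect.
}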
	I1119 22:22:20.522712  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:22:20.523297  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:22:20.523347  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:22:20.523395  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:22:20.551613  216336 cri.go:89] found id: "672f6a49fc2495f7edbf0877ebea2b24dae747ad6b41cdebff881f0f0e4ceb59"
	I1119 22:22:20.551633  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:22:20.551638  216336 cri.go:89] found id: ""
	I1119 22:22:20.551645  216336 logs.go:282] 2 containers: [672f6a49fc2495f7edbf0877ebea2b24dae747ad6b41cdebff881f0f0e4ceb59 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:22:20.551689  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:22:20.555787  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:22:20.560093  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:22:20.560165  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:22:20.588254  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:22:20.588273  216336 cri.go:89] found id: ""
	I1119 22:22:20.588280  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:22:20.588332  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:22:20.592566  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:22:20.592647  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:22:20.619583  216336 cri.go:89] found id: ""
	I1119 22:22:20.619604  216336 logs.go:282] 0 containers: []
	W1119 22:22:20.619611  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:22:20.619617  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:22:20.619671  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:22:20.646478  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:22:20.646500  216336 cri.go:89] found id: ""
	I1119 22:22:20.646511  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:22:20.646574  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:22:20.651611  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:22:20.651676  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:22:20.678618  216336 cri.go:89] found id: ""
	I1119 22:22:20.678643  216336 logs.go:282] 0 containers: []
	W1119 22:22:20.678654  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:22:20.678663  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:22:20.678721  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:22:20.705401  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:22:20.705427  216336 cri.go:89] found id: ""
	I1119 22:22:20.705437  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:22:20.705503  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:22:20.709579  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:22:20.709632  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:22:20.740166  216336 cri.go:89] found id: ""
	I1119 22:22:20.740192  216336 logs.go:282] 0 containers: []
	W1119 22:22:20.740203  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:22:20.740210  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:22:20.740266  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:22:20.770279  216336 cri.go:89] found id: ""
	I1119 22:22:20.770300  216336 logs.go:282] 0 containers: []
	W1119 22:22:20.770308  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:22:20.770321  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:22:20.770335  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:22:20.785998  216336 logs.go:123] Gathering logs for kube-apiserver [672f6a49fc2495f7edbf0877ebea2b24dae747ad6b41cdebff881f0f0e4ceb59] ...
	I1119 22:22:20.786036  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 672f6a49fc2495f7edbf0877ebea2b24dae747ad6b41cdebff881f0f0e4ceb59"
	I1119 22:22:20.822426  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:22:20.822457  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:22:20.862380  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:22:20.862419  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:22:20.901714  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:22:20.901751  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:22:20.935491  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:22:20.935523  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:22:20.967640  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:22:20.967676  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:22:21.050652  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:22:21.050681  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:22:21.050693  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:22:21.085685  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:22:21.085717  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 22:22:21.132329  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:22:21.132367  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:22:23.736997  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:22:23.737408  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:22:23.737456  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:22:23.737501  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:22:23.765875  216336 cri.go:89] found id: "672f6a49fc2495f7edbf0877ebea2b24dae747ad6b41cdebff881f0f0e4ceb59"
	I1119 22:22:23.765911  216336 cri.go:89] found id: "b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:22:23.765916  216336 cri.go:89] found id: ""
	I1119 22:22:23.765924  216336 logs.go:282] 2 containers: [672f6a49fc2495f7edbf0877ebea2b24dae747ad6b41cdebff881f0f0e4ceb59 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42]
	I1119 22:22:23.765980  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:22:23.770141  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:22:23.774064  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 22:22:23.774126  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:22:23.825762  216336 cri.go:89] found id: "4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:22:23.825789  216336 cri.go:89] found id: ""
	I1119 22:22:23.825799  216336 logs.go:282] 1 containers: [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc]
	I1119 22:22:23.825855  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:22:23.831125  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 22:22:23.831183  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:22:23.862766  216336 cri.go:89] found id: ""
	I1119 22:22:23.862792  216336 logs.go:282] 0 containers: []
	W1119 22:22:23.862800  216336 logs.go:284] No container was found matching "coredns"
	I1119 22:22:23.862806  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:22:23.862864  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:22:23.891863  216336 cri.go:89] found id: "599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:22:23.891896  216336 cri.go:89] found id: ""
	I1119 22:22:23.891907  216336 logs.go:282] 1 containers: [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0]
	I1119 22:22:23.891977  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:22:23.896561  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:22:23.896633  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:22:23.928120  216336 cri.go:89] found id: ""
	I1119 22:22:23.928144  216336 logs.go:282] 0 containers: []
	W1119 22:22:23.928154  216336 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:22:23.928161  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:22:23.928213  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:22:23.960778  216336 cri.go:89] found id: "1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:22:23.960807  216336 cri.go:89] found id: ""
	I1119 22:22:23.960817  216336 logs.go:282] 1 containers: [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2]
	I1119 22:22:23.960920  216336 ssh_runner.go:195] Run: which crictl
	I1119 22:22:23.965121  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 22:22:23.965194  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:22:23.993822  216336 cri.go:89] found id: ""
	I1119 22:22:23.993850  216336 logs.go:282] 0 containers: []
	W1119 22:22:23.993859  216336 logs.go:284] No container was found matching "kindnet"
	I1119 22:22:23.993867  216336 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:22:23.993944  216336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:22:24.024263  216336 cri.go:89] found id: ""
	I1119 22:22:24.024283  216336 logs.go:282] 0 containers: []
	W1119 22:22:24.024290  216336 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:22:24.024310  216336 logs.go:123] Gathering logs for dmesg ...
	I1119 22:22:24.024324  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:22:24.038915  216336 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:22:24.038941  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:22:24.114084  216336 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:22:24.114104  216336 logs.go:123] Gathering logs for kube-apiserver [672f6a49fc2495f7edbf0877ebea2b24dae747ad6b41cdebff881f0f0e4ceb59] ...
	I1119 22:22:24.114118  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 672f6a49fc2495f7edbf0877ebea2b24dae747ad6b41cdebff881f0f0e4ceb59"
	I1119 22:22:24.154032  216336 logs.go:123] Gathering logs for kube-controller-manager [1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2] ...
	I1119 22:22:24.154069  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1711567faf2a254de8ee6773eb8356d5c9397538723da4b355699cea8ea8aec2"
	I1119 22:22:24.198232  216336 logs.go:123] Gathering logs for container status ...
	I1119 22:22:24.198262  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:22:24.238737  216336 logs.go:123] Gathering logs for kubelet ...
	I1119 22:22:24.238778  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:22:24.372597  216336 logs.go:123] Gathering logs for kube-apiserver [b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42] ...
	I1119 22:22:24.372630  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0bca29d87e3482a47e70637e6082ed5b7723b5f5b5c446ee7e2bec7d66edf42"
	I1119 22:22:24.411157  216336 logs.go:123] Gathering logs for etcd [4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc] ...
	I1119 22:22:24.411194  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4aecac92720a6940a95e679eabf3ee1217afd1b910e328854b6a49a460e2f9dc"
	I1119 22:22:24.453553  216336 logs.go:123] Gathering logs for kube-scheduler [599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0] ...
	I1119 22:22:24.453595  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 599fa15a1b47a9c7bd619d50d6c3ef5df360d8fed59c2ae5ca959bfbaafb91d0"
	I1119 22:22:24.496874  216336 logs.go:123] Gathering logs for containerd ...
	I1119 22:22:24.496926  216336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
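[editor's note] The post-mortem loop above (repeated twice for the 216336 run) first lists CRI containers by name with `crictl ps -a --quiet --name=...` and then tails the last 400 log lines of whatever it finds. A rough Go sketch of that pattern is given below purely as an illustration; it shells out to a plain `crictl` on PATH rather than the /usr/local/bin/crictl used in the log, and the component names are taken from the output above.

// cri_logs_sketch.go — illustrative only; mirrors the "listing CRI containers"
// and "Gathering logs" steps, not minikube's cri.go/logs.go.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or not) whose name
// matches the given filter, as reported by crictl.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, name := range components {
		ids, err := containerIDs(name)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			fmt.Printf("=== logs for %s [%s] ===\n", name, id)
			// Tail the last 400 lines, as the test helper does.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(logs))
		}
	}
}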
	I1119 22:22:22.537776  271072 addons.go:515] duration metric: took 642.983318ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 22:22:22.793543  271072 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-299509" context rescaled to 1 replicas
	W1119 22:22:24.294493  271072 node_ready.go:57] node "embed-certs-299509" has "Ready":"False" status (will retry)
	I1119 22:22:25.042632  276591 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-409240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:22:25.062187  276591 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1119 22:22:25.066834  276591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:22:25.079613  276591 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-409240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:22:25.079801  276591 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:22:25.079953  276591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:22:25.105677  276591 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:22:25.105695  276591 containerd.go:534] Images already preloaded, skipping extraction
	I1119 22:22:25.105737  276591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:22:25.134959  276591 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:22:25.134980  276591 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:22:25.134988  276591 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.1 containerd true true} ...
	I1119 22:22:25.135069  276591 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-409240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:22:25.135113  276591 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:22:25.167678  276591 cni.go:84] Creating CNI manager for ""
	I1119 22:22:25.167709  276591 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:22:25.167729  276591 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:22:25.167757  276591 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-409240 NodeName:default-k8s-diff-port-409240 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:22:25.167924  276591 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-409240"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:22:25.168000  276591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:22:25.177102  276591 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:22:25.177167  276591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:22:25.185659  276591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (333 bytes)
	I1119 22:22:25.201528  276591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:22:25.219604  276591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2243 bytes)
	I1119 22:22:25.234045  276591 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:22:25.238413  276591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
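[editor's note] The two grep/bash one-liners above (for host.minikube.internal and control-plane.minikube.internal) pin a hosts entry idempotently: drop any stale line for the name, then append the current IP. A small Go sketch of the same update is shown below as an illustration only (it must run as root to write /etc/hosts; the IP and host name are copied from the log).

// hosts_pin_sketch.go — illustrative only; mirrors the shell one-liner in the log.
package main

import (
	"os"
	"strings"
)

// pinHost rewrites hostsPath so it contains exactly one entry for name,
// pointing at ip. Any previous line ending in "\t<name>" is dropped.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry, drop it
		}
		kept = append(kept, line)
	}
	// Trim trailing blank lines so they do not accumulate across runs.
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, ip+"\t"+name, "")
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")), 0o644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.103.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}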
	I1119 22:22:25.250526  276591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:22:25.343442  276591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:22:25.371812  276591 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240 for IP: 192.168.103.2
	I1119 22:22:25.371837  276591 certs.go:195] generating shared ca certs ...
	I1119 22:22:25.371858  276591 certs.go:227] acquiring lock for ca certs: {Name:mkfe62d1b64cfdbe1c6a3d1f38aa0edc5b9ec419 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:25.372058  276591 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key
	I1119 22:22:25.372131  276591 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key
	I1119 22:22:25.372150  276591 certs.go:257] generating profile certs ...
	I1119 22:22:25.372238  276591 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/client.key
	I1119 22:22:25.372266  276591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/client.crt with IP's: []
	I1119 22:22:25.631136  276591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/client.crt ...
	I1119 22:22:25.631165  276591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/client.crt: {Name:mk5f39f8d1a37a2e94108e0d9a32b5b6758e90b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:25.631331  276591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/client.key ...
	I1119 22:22:25.631347  276591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/client.key: {Name:mkb9a1787bba9fa4e7734f7dc514abd509a689a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:25.631432  276591 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.key.444ee3d6
	I1119 22:22:25.631451  276591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.crt.444ee3d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1119 22:22:24.394388  280330 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:22:24.394785  280330 start.go:159] libmachine.API.Create for "newest-cni-982287" (driver="docker")
	I1119 22:22:24.394824  280330 client.go:173] LocalClient.Create starting
	I1119 22:22:24.394987  280330 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem
	I1119 22:22:24.395041  280330 main.go:143] libmachine: Decoding PEM data...
	I1119 22:22:24.395067  280330 main.go:143] libmachine: Parsing certificate...
	I1119 22:22:24.395137  280330 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem
	I1119 22:22:24.395166  280330 main.go:143] libmachine: Decoding PEM data...
	I1119 22:22:24.395182  280330 main.go:143] libmachine: Parsing certificate...
	I1119 22:22:24.395629  280330 cli_runner.go:164] Run: docker network inspect newest-cni-982287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:22:24.423746  280330 cli_runner.go:211] docker network inspect newest-cni-982287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:22:24.423844  280330 network_create.go:284] running [docker network inspect newest-cni-982287] to gather additional debugging logs...
	I1119 22:22:24.423868  280330 cli_runner.go:164] Run: docker network inspect newest-cni-982287
	W1119 22:22:24.445080  280330 cli_runner.go:211] docker network inspect newest-cni-982287 returned with exit code 1
	I1119 22:22:24.445120  280330 network_create.go:287] error running [docker network inspect newest-cni-982287]: docker network inspect newest-cni-982287: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-982287 not found
	I1119 22:22:24.445136  280330 network_create.go:289] output of [docker network inspect newest-cni-982287]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-982287 not found
	
	** /stderr **
	I1119 22:22:24.445260  280330 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:22:24.466599  280330 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-02d9279961e7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:f0:7b:99:dd:08} reservation:<nil>}
	I1119 22:22:24.467401  280330 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-474134d72c89 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ee:14:41:ce:21:e4} reservation:<nil>}
	I1119 22:22:24.468189  280330 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-527206f47d61 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:ef:fd:4c:e4:1b} reservation:<nil>}
	I1119 22:22:24.469003  280330 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ac16fd64007f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:16:dc:21:09:78:e5} reservation:<nil>}
	I1119 22:22:24.470218  280330 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f14120}
	I1119 22:22:24.470248  280330 network_create.go:124] attempt to create docker network newest-cni-982287 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1119 22:22:24.470315  280330 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-982287 newest-cni-982287
	I1119 22:22:24.533470  280330 network_create.go:108] docker network newest-cni-982287 192.168.85.0/24 created
	I1119 22:22:24.533519  280330 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-982287" container
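[editor's note] In the newest-cni-982287 run above, network.go walks candidate 192.168.x.0/24 subnets, skips the ones already owned by existing docker bridges (49, 58, 67, 76) and settles on 192.168.85.0/24, from which it derives the gateway (.1) and the node's static IP (.2). The toy Go sketch below only illustrates that "first free /24" scan; the step of 9 in the third octet is inferred from the skipped subnets in the log, and the taken-subnet list would normally come from `docker network inspect` rather than being hard-coded.

// subnet_pick_sketch.go — illustrative only.
package main

import "fmt"

func main() {
	// Subnets already owned by existing docker bridges (from the log above).
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}

	// Candidates step by 9 in the third octet (49, 58, 67, ...).
	for third := 49; third <= 247; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[subnet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", subnet)
		// Gateway would be 192.168.<third>.1 and the container's static IP
		// 192.168.<third>.2, e.g. 192.168.85.1 / 192.168.85.2 above.
		break
	}
}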
	I1119 22:22:24.533610  280330 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:22:24.554770  280330 cli_runner.go:164] Run: docker volume create newest-cni-982287 --label name.minikube.sigs.k8s.io=newest-cni-982287 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:22:24.574773  280330 oci.go:103] Successfully created a docker volume newest-cni-982287
	I1119 22:22:24.574875  280330 cli_runner.go:164] Run: docker run --rm --name newest-cni-982287-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-982287 --entrypoint /usr/bin/test -v newest-cni-982287:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:22:25.015010  280330 oci.go:107] Successfully prepared a docker volume newest-cni-982287
	I1119 22:22:25.015083  280330 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:22:25.015097  280330 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 22:22:25.015177  280330 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-9296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-982287:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 22:22:27.052619  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:22:27.053060  216336 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 22:22:27.053134  216336 kubeadm.go:602] duration metric: took 4m8.100180752s to restartPrimaryControlPlane
	W1119 22:22:27.053205  216336 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
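[editor's note] Each "Checking apiserver healthz at https://192.168.76.2:8443/healthz ... connection refused" pair above is one iteration of a health poll; after roughly 4m8s of failures the restart of the primary control plane is abandoned and minikube falls back to `kubeadm reset`. The Go sketch below only illustrates such a poll (the URL and overall timeout are taken from the log, everything else is assumed; TLS verification is skipped because the apiserver serves a self-signed certificate during bring-up).

// healthz_poll_sketch.go — illustrative only, not minikube's api_server.go.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Self-signed apiserver cert during bring-up.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", url+":", err)
		} else {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				return nil
			}
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err) // a caller would fall back to `kubeadm reset` here
	}
}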
	I1119 22:22:27.053270  216336 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1119 22:22:29.595586  216336 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.542287897s)
	I1119 22:22:29.595659  216336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:22:29.613831  216336 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:22:29.624808  216336 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:22:29.624878  216336 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:22:29.634866  216336 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:22:29.634910  216336 kubeadm.go:158] found existing configuration files:
	
	I1119 22:22:29.634958  216336 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 22:22:29.643935  216336 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:22:29.643998  216336 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:22:29.651967  216336 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 22:22:29.661497  216336 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:22:29.661562  216336 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:22:29.671814  216336 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 22:22:29.680395  216336 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:22:29.680451  216336 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:22:29.688981  216336 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 22:22:29.697200  216336 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:22:29.697265  216336 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 22:22:29.704874  216336 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:22:29.744838  216336 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:22:29.744909  216336 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:22:29.766513  216336 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:22:29.766576  216336 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 22:22:29.766615  216336 kubeadm.go:319] OS: Linux
	I1119 22:22:29.766720  216336 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:22:29.766817  216336 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:22:29.766949  216336 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:22:29.767034  216336 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:22:29.767118  216336 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:22:29.767204  216336 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:22:29.767301  216336 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:22:29.767401  216336 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 22:22:29.845500  216336 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:22:29.845633  216336 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:22:29.845758  216336 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	W1119 22:22:26.794071  271072 node_ready.go:57] node "embed-certs-299509" has "Ready":"False" status (will retry)
	W1119 22:22:28.794688  271072 node_ready.go:57] node "embed-certs-299509" has "Ready":"False" status (will retry)
	I1119 22:22:26.025597  276591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.crt.444ee3d6 ...
	I1119 22:22:26.025625  276591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.crt.444ee3d6: {Name:mkd4a17b950761c17a5f1c485097fe70aeb7115f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:26.025780  276591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.key.444ee3d6 ...
	I1119 22:22:26.025793  276591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.key.444ee3d6: {Name:mkce196bc6f1621a4671932273b821505129c4dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:26.025863  276591 certs.go:382] copying /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.crt.444ee3d6 -> /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.crt
	I1119 22:22:26.025977  276591 certs.go:386] copying /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.key.444ee3d6 -> /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.key
	I1119 22:22:26.026038  276591 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/proxy-client.key
	I1119 22:22:26.026053  276591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/proxy-client.crt with IP's: []
	I1119 22:22:26.110242  276591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/proxy-client.crt ...
	I1119 22:22:26.110266  276591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/proxy-client.crt: {Name:mkd034f4ab2e71e3031349036ccdc11118b20207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:26.110421  276591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/proxy-client.key ...
	I1119 22:22:26.110435  276591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/proxy-client.key: {Name:mkc40aef0c1ddcba5bcb699a18bcc20385df9b2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:26.110627  276591 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem (1338 bytes)
	W1119 22:22:26.110662  276591 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821_empty.pem, impossibly tiny 0 bytes
	I1119 22:22:26.110670  276591 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:22:26.110694  276591 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:22:26.110715  276591 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:22:26.110735  276591 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem (1679 bytes)
	I1119 22:22:26.110776  276591 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem (1708 bytes)
	I1119 22:22:26.111384  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:22:26.130746  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:22:26.150827  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:22:26.169590  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:22:26.189660  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 22:22:26.209597  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:22:26.228793  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:22:26.247579  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/default-k8s-diff-port-409240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:22:26.266234  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem --> /usr/share/ca-certificates/128212.pem (1708 bytes)
	I1119 22:22:26.289361  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:22:26.311453  276591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem --> /usr/share/ca-certificates/12821.pem (1338 bytes)
	I1119 22:22:26.333771  276591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:22:26.348643  276591 ssh_runner.go:195] Run: openssl version
	I1119 22:22:26.355320  276591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:22:26.365196  276591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:22:26.369774  276591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:48 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:22:26.369837  276591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:22:26.407001  276591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:22:26.416898  276591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12821.pem && ln -fs /usr/share/ca-certificates/12821.pem /etc/ssl/certs/12821.pem"
	I1119 22:22:26.426385  276591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12821.pem
	I1119 22:22:26.431341  276591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12821.pem
	I1119 22:22:26.431409  276591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12821.pem
	I1119 22:22:26.468064  276591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12821.pem /etc/ssl/certs/51391683.0"
	I1119 22:22:26.478073  276591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128212.pem && ln -fs /usr/share/ca-certificates/128212.pem /etc/ssl/certs/128212.pem"
	I1119 22:22:26.487541  276591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128212.pem
	I1119 22:22:26.492229  276591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128212.pem
	I1119 22:22:26.492294  276591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128212.pem
	I1119 22:22:26.529743  276591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128212.pem /etc/ssl/certs/3ec20f2e.0"
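The three hash-named symlinks above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's hashed-directory convention: the subject hash printed by "openssl x509 -hash" becomes the link name, so the TLS stack can look the CA up by hash at verification time. A minimal sketch of the same two steps, assuming the PEM has already been linked into /etc/ssl/certs as the log shows (paths are illustrative):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    # compute the subject hash and create the <hash>.0 symlink next to the cert
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "/etc/ssl/certs/$(basename "$CERT")" "/etc/ssl/certs/${HASH}.0"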
	I1119 22:22:26.539856  276591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:22:26.544020  276591 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:22:26.544094  276591 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-409240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:22:26.544240  276591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:22:26.544323  276591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:22:26.575244  276591 cri.go:89] found id: ""
	I1119 22:22:26.575301  276591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:22:26.584632  276591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:22:26.593649  276591 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:22:26.593719  276591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:22:26.603310  276591 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:22:26.603333  276591 kubeadm.go:158] found existing configuration files:
	
	I1119 22:22:26.603381  276591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1119 22:22:26.612040  276591 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:22:26.612101  276591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:22:26.620326  276591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1119 22:22:26.630737  276591 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:22:26.630811  276591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:22:26.639701  276591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1119 22:22:26.648722  276591 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:22:26.648783  276591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:22:26.659210  276591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1119 22:22:26.667820  276591 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:22:26.667894  276591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
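The four grep-then-remove pairs above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8444, and is otherwise deleted so that kubeadm init can regenerate it. A rough shell equivalent of that loop (a sketch of the logic the log shows, not minikube's actual Go code):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it references the expected API endpoint
      sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done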
	I1119 22:22:26.676498  276591 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:22:26.743561  276591 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 22:22:26.810970  276591 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:22:29.544819  280330 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-9296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-982287:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.529563131s)
	I1119 22:22:29.544856  280330 kic.go:203] duration metric: took 4.529754174s to extract preloaded images to volume ...
	W1119 22:22:29.544960  280330 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 22:22:29.545008  280330 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 22:22:29.545056  280330 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:22:29.612696  280330 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-982287 --name newest-cni-982287 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-982287 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-982287 --network newest-cni-982287 --ip 192.168.85.2 --volume newest-cni-982287:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:22:29.949075  280330 cli_runner.go:164] Run: docker container inspect newest-cni-982287 --format={{.State.Running}}
	I1119 22:22:29.969100  280330 cli_runner.go:164] Run: docker container inspect newest-cni-982287 --format={{.State.Status}}
	I1119 22:22:29.989712  280330 cli_runner.go:164] Run: docker exec newest-cni-982287 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:22:30.039140  280330 oci.go:144] the created container "newest-cni-982287" has a running status.
	I1119 22:22:30.039169  280330 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-9296/.minikube/machines/newest-cni-982287/id_rsa...
	I1119 22:22:30.133567  280330 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-9296/.minikube/machines/newest-cni-982287/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:22:30.163995  280330 cli_runner.go:164] Run: docker container inspect newest-cni-982287 --format={{.State.Status}}
	I1119 22:22:30.186505  280330 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:22:30.186531  280330 kic_runner.go:114] Args: [docker exec --privileged newest-cni-982287 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:22:30.238099  280330 cli_runner.go:164] Run: docker container inspect newest-cni-982287 --format={{.State.Status}}
	I1119 22:22:30.260123  280330 machine.go:94] provisionDockerMachine start ...
	I1119 22:22:30.260253  280330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:22:30.289537  280330 main.go:143] libmachine: Using SSH client type: native
	I1119 22:22:30.290026  280330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1119 22:22:30.290051  280330 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:22:30.291294  280330 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33964->127.0.0.1:33088: read: connection reset by peer
	I1119 22:22:33.431758  280330 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-982287
	
	I1119 22:22:33.431790  280330 ubuntu.go:182] provisioning hostname "newest-cni-982287"
	I1119 22:22:33.431854  280330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:22:33.453700  280330 main.go:143] libmachine: Using SSH client type: native
	I1119 22:22:33.453982  280330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1119 22:22:33.453999  280330 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-982287 && echo "newest-cni-982287" | sudo tee /etc/hostname
	I1119 22:22:33.617167  280330 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-982287
	
	I1119 22:22:33.617241  280330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:22:33.653151  280330 main.go:143] libmachine: Using SSH client type: native
	I1119 22:22:33.653455  280330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1119 22:22:33.653482  280330 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-982287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-982287/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-982287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:22:33.806363  280330 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:22:33.806400  280330 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9296/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9296/.minikube}
	I1119 22:22:33.806428  280330 ubuntu.go:190] setting up certificates
	I1119 22:22:33.806442  280330 provision.go:84] configureAuth start
	I1119 22:22:33.806525  280330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-982287
	I1119 22:22:33.830568  280330 provision.go:143] copyHostCerts
	I1119 22:22:33.830645  280330 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem, removing ...
	I1119 22:22:33.830657  280330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem
	I1119 22:22:33.830744  280330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem (1078 bytes)
	I1119 22:22:33.830891  280330 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem, removing ...
	I1119 22:22:33.830904  280330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem
	I1119 22:22:33.830955  280330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem (1123 bytes)
	I1119 22:22:33.831091  280330 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem, removing ...
	I1119 22:22:33.831103  280330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem
	I1119 22:22:33.831143  280330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem (1679 bytes)
	I1119 22:22:33.831238  280330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem org=jenkins.newest-cni-982287 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-982287]
	W1119 22:22:31.293984  271072 node_ready.go:57] node "embed-certs-299509" has "Ready":"False" status (will retry)
	W1119 22:22:33.294755  271072 node_ready.go:57] node "embed-certs-299509" has "Ready":"False" status (will retry)
	I1119 22:22:33.793952  271072 node_ready.go:49] node "embed-certs-299509" is "Ready"
	I1119 22:22:33.794000  271072 node_ready.go:38] duration metric: took 11.5033648s for node "embed-certs-299509" to be "Ready" ...
	I1119 22:22:33.794017  271072 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:22:33.794073  271072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:22:33.810709  271072 api_server.go:72] duration metric: took 11.915955391s to wait for apiserver process to appear ...
	I1119 22:22:33.810742  271072 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:22:33.810771  271072 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:22:33.815794  271072 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1119 22:22:33.817268  271072 api_server.go:141] control plane version: v1.34.1
	I1119 22:22:33.817298  271072 api_server.go:131] duration metric: took 6.547094ms to wait for apiserver health ...
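The healthz probe above is a plain HTTPS GET against the apiserver; an HTTP 200 with body "ok" is treated as healthy. A hand-run equivalent (a sketch; it assumes anonymous access to /healthz is allowed, which the default RBAC bootstrap grants via the system:public-info-viewer binding):

    curl -sk https://192.168.94.2:8443/healthz
    # expected output: ok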
	I1119 22:22:33.817307  271072 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:22:33.821284  271072 system_pods.go:59] 8 kube-system pods found
	I1119 22:22:33.821325  271072 system_pods.go:61] "coredns-66bc5c9577-dmd59" [2c555b78-b464-40e7-be35-c2b2286321ab] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:22:33.821335  271072 system_pods.go:61] "etcd-embed-certs-299509" [a216afec-b3af-407e-8bd6-bee515ee12ac] Running
	I1119 22:22:33.821344  271072 system_pods.go:61] "kindnet-st248" [0f41e3e9-9f6f-4f28-a037-373cc6996455] Running
	I1119 22:22:33.821351  271072 system_pods.go:61] "kube-apiserver-embed-certs-299509" [e827ee34-1837-42e1-8e2e-85d36aa7ed0d] Running
	I1119 22:22:33.821358  271072 system_pods.go:61] "kube-controller-manager-embed-certs-299509" [d2d95ea4-394e-408c-96bd-dfd229552da3] Running
	I1119 22:22:33.821362  271072 system_pods.go:61] "kube-proxy-b7gxk" [fc0c848b-ceac-4473-8a9f-42665ee25a5b] Running
	I1119 22:22:33.821365  271072 system_pods.go:61] "kube-scheduler-embed-certs-299509" [da7c1834-2bff-467c-9a29-7c351eea9e13] Running
	I1119 22:22:33.821373  271072 system_pods.go:61] "storage-provisioner" [87ae0335-b9d0-4969-8fd0-febca42399e1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:22:33.821385  271072 system_pods.go:74] duration metric: took 4.070526ms to wait for pod list to return data ...
	I1119 22:22:33.821399  271072 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:22:33.825243  271072 default_sa.go:45] found service account: "default"
	I1119 22:22:33.825273  271072 default_sa.go:55] duration metric: took 3.86707ms for default service account to be created ...
	I1119 22:22:33.825463  271072 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:22:33.828586  271072 system_pods.go:86] 8 kube-system pods found
	I1119 22:22:33.828618  271072 system_pods.go:89] "coredns-66bc5c9577-dmd59" [2c555b78-b464-40e7-be35-c2b2286321ab] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:22:33.828627  271072 system_pods.go:89] "etcd-embed-certs-299509" [a216afec-b3af-407e-8bd6-bee515ee12ac] Running
	I1119 22:22:33.828634  271072 system_pods.go:89] "kindnet-st248" [0f41e3e9-9f6f-4f28-a037-373cc6996455] Running
	I1119 22:22:33.828640  271072 system_pods.go:89] "kube-apiserver-embed-certs-299509" [e827ee34-1837-42e1-8e2e-85d36aa7ed0d] Running
	I1119 22:22:33.828646  271072 system_pods.go:89] "kube-controller-manager-embed-certs-299509" [d2d95ea4-394e-408c-96bd-dfd229552da3] Running
	I1119 22:22:33.828651  271072 system_pods.go:89] "kube-proxy-b7gxk" [fc0c848b-ceac-4473-8a9f-42665ee25a5b] Running
	I1119 22:22:33.828657  271072 system_pods.go:89] "kube-scheduler-embed-certs-299509" [da7c1834-2bff-467c-9a29-7c351eea9e13] Running
	I1119 22:22:33.828665  271072 system_pods.go:89] "storage-provisioner" [87ae0335-b9d0-4969-8fd0-febca42399e1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:22:33.828695  271072 retry.go:31] will retry after 283.925445ms: missing components: kube-dns
	I1119 22:22:34.118106  271072 system_pods.go:86] 8 kube-system pods found
	I1119 22:22:34.118143  271072 system_pods.go:89] "coredns-66bc5c9577-dmd59" [2c555b78-b464-40e7-be35-c2b2286321ab] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:22:34.118150  271072 system_pods.go:89] "etcd-embed-certs-299509" [a216afec-b3af-407e-8bd6-bee515ee12ac] Running
	I1119 22:22:34.118156  271072 system_pods.go:89] "kindnet-st248" [0f41e3e9-9f6f-4f28-a037-373cc6996455] Running
	I1119 22:22:34.118160  271072 system_pods.go:89] "kube-apiserver-embed-certs-299509" [e827ee34-1837-42e1-8e2e-85d36aa7ed0d] Running
	I1119 22:22:34.118164  271072 system_pods.go:89] "kube-controller-manager-embed-certs-299509" [d2d95ea4-394e-408c-96bd-dfd229552da3] Running
	I1119 22:22:34.118167  271072 system_pods.go:89] "kube-proxy-b7gxk" [fc0c848b-ceac-4473-8a9f-42665ee25a5b] Running
	I1119 22:22:34.118170  271072 system_pods.go:89] "kube-scheduler-embed-certs-299509" [da7c1834-2bff-467c-9a29-7c351eea9e13] Running
	I1119 22:22:34.118175  271072 system_pods.go:89] "storage-provisioner" [87ae0335-b9d0-4969-8fd0-febca42399e1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:22:34.118188  271072 retry.go:31] will retry after 317.330113ms: missing components: kube-dns
	I1119 22:22:34.439211  271072 system_pods.go:86] 8 kube-system pods found
	I1119 22:22:34.439242  271072 system_pods.go:89] "coredns-66bc5c9577-dmd59" [2c555b78-b464-40e7-be35-c2b2286321ab] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:22:34.439248  271072 system_pods.go:89] "etcd-embed-certs-299509" [a216afec-b3af-407e-8bd6-bee515ee12ac] Running
	I1119 22:22:34.439260  271072 system_pods.go:89] "kindnet-st248" [0f41e3e9-9f6f-4f28-a037-373cc6996455] Running
	I1119 22:22:34.439264  271072 system_pods.go:89] "kube-apiserver-embed-certs-299509" [e827ee34-1837-42e1-8e2e-85d36aa7ed0d] Running
	I1119 22:22:34.439270  271072 system_pods.go:89] "kube-controller-manager-embed-certs-299509" [d2d95ea4-394e-408c-96bd-dfd229552da3] Running
	I1119 22:22:34.439274  271072 system_pods.go:89] "kube-proxy-b7gxk" [fc0c848b-ceac-4473-8a9f-42665ee25a5b] Running
	I1119 22:22:34.439279  271072 system_pods.go:89] "kube-scheduler-embed-certs-299509" [da7c1834-2bff-467c-9a29-7c351eea9e13] Running
	I1119 22:22:34.439287  271072 system_pods.go:89] "storage-provisioner" [87ae0335-b9d0-4969-8fd0-febca42399e1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:22:34.439308  271072 retry.go:31] will retry after 343.185922ms: missing components: kube-dns
	I1119 22:22:34.787502  271072 system_pods.go:86] 8 kube-system pods found
	I1119 22:22:34.787548  271072 system_pods.go:89] "coredns-66bc5c9577-dmd59" [2c555b78-b464-40e7-be35-c2b2286321ab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:22:34.787559  271072 system_pods.go:89] "etcd-embed-certs-299509" [a216afec-b3af-407e-8bd6-bee515ee12ac] Running
	I1119 22:22:34.787569  271072 system_pods.go:89] "kindnet-st248" [0f41e3e9-9f6f-4f28-a037-373cc6996455] Running
	I1119 22:22:34.787575  271072 system_pods.go:89] "kube-apiserver-embed-certs-299509" [e827ee34-1837-42e1-8e2e-85d36aa7ed0d] Running
	I1119 22:22:34.787582  271072 system_pods.go:89] "kube-controller-manager-embed-certs-299509" [d2d95ea4-394e-408c-96bd-dfd229552da3] Running
	I1119 22:22:34.787586  271072 system_pods.go:89] "kube-proxy-b7gxk" [fc0c848b-ceac-4473-8a9f-42665ee25a5b] Running
	I1119 22:22:34.787591  271072 system_pods.go:89] "kube-scheduler-embed-certs-299509" [da7c1834-2bff-467c-9a29-7c351eea9e13] Running
	I1119 22:22:34.787596  271072 system_pods.go:89] "storage-provisioner" [87ae0335-b9d0-4969-8fd0-febca42399e1] Running
	I1119 22:22:34.787606  271072 system_pods.go:126] duration metric: took 962.135619ms to wait for k8s-apps to be running ...
	I1119 22:22:34.787616  271072 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:22:34.787667  271072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:22:34.806841  271072 system_svc.go:56] duration metric: took 19.214772ms WaitForService to wait for kubelet
	I1119 22:22:34.806915  271072 kubeadm.go:587] duration metric: took 12.912189637s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:22:34.806938  271072 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:22:34.810623  271072 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 22:22:34.810654  271072 node_conditions.go:123] node cpu capacity is 8
	I1119 22:22:34.810671  271072 node_conditions.go:105] duration metric: took 3.728429ms to run NodePressure ...
	I1119 22:22:34.810685  271072 start.go:242] waiting for startup goroutines ...
	I1119 22:22:34.810694  271072 start.go:247] waiting for cluster config update ...
	I1119 22:22:34.810707  271072 start.go:256] writing updated cluster config ...
	I1119 22:22:34.811047  271072 ssh_runner.go:195] Run: rm -f paused
	I1119 22:22:34.816231  271072 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:22:34.820920  271072 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dmd59" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:34.826183  271072 pod_ready.go:94] pod "coredns-66bc5c9577-dmd59" is "Ready"
	I1119 22:22:34.826210  271072 pod_ready.go:86] duration metric: took 5.257551ms for pod "coredns-66bc5c9577-dmd59" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:34.828987  271072 pod_ready.go:83] waiting for pod "etcd-embed-certs-299509" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:34.833480  271072 pod_ready.go:94] pod "etcd-embed-certs-299509" is "Ready"
	I1119 22:22:34.833506  271072 pod_ready.go:86] duration metric: took 4.492269ms for pod "etcd-embed-certs-299509" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:34.836026  271072 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-299509" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:34.840861  271072 pod_ready.go:94] pod "kube-apiserver-embed-certs-299509" is "Ready"
	I1119 22:22:34.840946  271072 pod_ready.go:86] duration metric: took 4.894896ms for pod "kube-apiserver-embed-certs-299509" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:34.843228  271072 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-299509" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:35.221427  271072 pod_ready.go:94] pod "kube-controller-manager-embed-certs-299509" is "Ready"
	I1119 22:22:35.221457  271072 pod_ready.go:86] duration metric: took 378.200798ms for pod "kube-controller-manager-embed-certs-299509" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:35.421075  271072 pod_ready.go:83] waiting for pod "kube-proxy-b7gxk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:35.820517  271072 pod_ready.go:94] pod "kube-proxy-b7gxk" is "Ready"
	I1119 22:22:35.820542  271072 pod_ready.go:86] duration metric: took 399.44003ms for pod "kube-proxy-b7gxk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:36.022309  271072 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-299509" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:36.421735  271072 pod_ready.go:94] pod "kube-scheduler-embed-certs-299509" is "Ready"
	I1119 22:22:36.421766  271072 pod_ready.go:86] duration metric: took 399.426239ms for pod "kube-scheduler-embed-certs-299509" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:22:36.421782  271072 pod_ready.go:40] duration metric: took 1.605467692s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
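The pod_ready polling above can be approximated by hand with kubectl; a minimal sketch, assuming the kubeconfig context carries the profile name as minikube configures by default, and using two of the label selectors the test lists:

    kubectl --context embed-certs-299509 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
    kubectl --context embed-certs-299509 -n kube-system wait pod \
      -l component=kube-apiserver --for=condition=Ready --timeout=4m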
	I1119 22:22:36.482197  271072 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 22:22:36.483810  271072 out.go:179] * Done! kubectl is now configured to use "embed-certs-299509" cluster and "default" namespace by default
	I1119 22:22:34.707484  280330 provision.go:177] copyRemoteCerts
	I1119 22:22:34.707566  280330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:22:34.707606  280330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:22:34.725423  280330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/newest-cni-982287/id_rsa Username:docker}
	I1119 22:22:34.826470  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:22:34.856115  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 22:22:34.879845  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:22:34.905572  280330 provision.go:87] duration metric: took 1.099102161s to configureAuth
	I1119 22:22:34.905604  280330 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:22:34.906171  280330 config.go:182] Loaded profile config "newest-cni-982287": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:22:34.906210  280330 machine.go:97] duration metric: took 4.64606146s to provisionDockerMachine
	I1119 22:22:34.906220  280330 client.go:176] duration metric: took 10.511385903s to LocalClient.Create
	I1119 22:22:34.906250  280330 start.go:167] duration metric: took 10.511463988s to libmachine.API.Create "newest-cni-982287"
	I1119 22:22:34.906263  280330 start.go:293] postStartSetup for "newest-cni-982287" (driver="docker")
	I1119 22:22:34.906275  280330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:22:34.906335  280330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:22:34.906379  280330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:22:34.931452  280330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/newest-cni-982287/id_rsa Username:docker}
	I1119 22:22:35.040926  280330 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:22:35.045946  280330 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:22:35.045989  280330 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:22:35.046003  280330 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9296/.minikube/addons for local assets ...
	I1119 22:22:35.046060  280330 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9296/.minikube/files for local assets ...
	I1119 22:22:35.046153  280330 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem -> 128212.pem in /etc/ssl/certs
	I1119 22:22:35.046274  280330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:22:35.056558  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem --> /etc/ssl/certs/128212.pem (1708 bytes)
	I1119 22:22:35.084815  280330 start.go:296] duration metric: took 178.536573ms for postStartSetup
	I1119 22:22:35.085278  280330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-982287
	I1119 22:22:35.110307  280330 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/config.json ...
	I1119 22:22:35.110611  280330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:22:35.110657  280330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:22:35.135042  280330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/newest-cni-982287/id_rsa Username:docker}
	I1119 22:22:35.237701  280330 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:22:35.243727  280330 start.go:128] duration metric: took 10.851573045s to createHost
	I1119 22:22:35.243757  280330 start.go:83] releasing machines lock for "newest-cni-982287", held for 10.851724024s
	I1119 22:22:35.243839  280330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-982287
	I1119 22:22:35.267661  280330 ssh_runner.go:195] Run: cat /version.json
	I1119 22:22:35.267708  280330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:22:35.267778  280330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:22:35.268212  280330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:22:35.292111  280330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/newest-cni-982287/id_rsa Username:docker}
	I1119 22:22:35.292342  280330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/newest-cni-982287/id_rsa Username:docker}
	I1119 22:22:35.467290  280330 ssh_runner.go:195] Run: systemctl --version
	I1119 22:22:35.474736  280330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:22:35.480229  280330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:22:35.480307  280330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:22:35.506774  280330 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 22:22:35.506798  280330 start.go:496] detecting cgroup driver to use...
	I1119 22:22:35.506827  280330 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 22:22:35.506865  280330 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 22:22:35.521633  280330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 22:22:35.534995  280330 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:22:35.535054  280330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:22:35.555622  280330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:22:35.573509  280330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:22:35.675129  280330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:22:35.771490  280330 docker.go:234] disabling docker service ...
	I1119 22:22:35.771557  280330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:22:35.791577  280330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:22:35.805345  280330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:22:35.893395  280330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:22:35.989514  280330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:22:36.006433  280330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:22:36.028122  280330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 22:22:36.042021  280330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 22:22:36.052163  280330 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1119 22:22:36.052246  280330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1119 22:22:36.062904  280330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:22:36.075204  280330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 22:22:36.087574  280330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:22:36.101783  280330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:22:36.110858  280330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 22:22:36.121164  280330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 22:22:36.132874  280330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 22:22:36.144559  280330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:22:36.154816  280330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:22:36.164165  280330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:22:36.271146  280330 ssh_runner.go:195] Run: sudo systemctl restart containerd
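The sed edits above switch containerd to the systemd cgroup driver, pin the pause image to registry.k8s.io/pause:3.10.1, re-enable unprivileged ports, and point crictl at the containerd socket before the daemon restart. A quick way to confirm the resulting settings inside the node (illustrative check commands, not part of the test):

    grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml
    cat /etc/crictl.yaml        # runtime-endpoint: unix:///run/containerd/containerd.sock
    sudo systemctl is-active containerd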
	I1119 22:22:36.418497  280330 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 22:22:36.418560  280330 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 22:22:36.424387  280330 start.go:564] Will wait 60s for crictl version
	I1119 22:22:36.424446  280330 ssh_runner.go:195] Run: which crictl
	I1119 22:22:36.429706  280330 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:22:36.466769  280330 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 22:22:36.466921  280330 ssh_runner.go:195] Run: containerd --version
	I1119 22:22:36.495723  280330 ssh_runner.go:195] Run: containerd --version
	I1119 22:22:36.527630  280330 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1119 22:22:36.529205  280330 cli_runner.go:164] Run: docker network inspect newest-cni-982287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:22:36.552951  280330 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 22:22:36.558500  280330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:22:36.575956  280330 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1119 22:22:36.991156  276591 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:22:36.991226  276591 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:22:36.991344  276591 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:22:36.991405  276591 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 22:22:36.991453  276591 kubeadm.go:319] OS: Linux
	I1119 22:22:36.991524  276591 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:22:36.991602  276591 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:22:36.991674  276591 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:22:36.991786  276591 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:22:36.991927  276591 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:22:36.992022  276591 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:22:36.992091  276591 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:22:36.992170  276591 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 22:22:36.992270  276591 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:22:36.992410  276591 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:22:36.992549  276591 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:22:36.992628  276591 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:22:36.994336  276591 out.go:252]   - Generating certificates and keys ...
	I1119 22:22:36.994448  276591 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:22:36.994539  276591 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:22:36.994630  276591 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:22:36.994708  276591 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:22:36.994792  276591 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:22:36.994862  276591 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:22:36.995098  276591 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:22:36.995346  276591 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-409240 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 22:22:36.995425  276591 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:22:36.995644  276591 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-409240 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 22:22:36.995739  276591 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:22:36.995835  276591 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:22:36.995944  276591 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:22:36.996019  276591 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:22:36.996083  276591 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:22:36.996167  276591 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:22:36.996240  276591 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:22:36.996352  276591 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:22:36.996422  276591 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:22:36.996535  276591 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:22:36.996630  276591 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:22:36.998058  276591 out.go:252]   - Booting up control plane ...
	I1119 22:22:36.998164  276591 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:22:36.998358  276591 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:22:36.998492  276591 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:22:36.998683  276591 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:22:36.998845  276591 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:22:36.999001  276591 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:22:36.999127  276591 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:22:36.999233  276591 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:22:36.999428  276591 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:22:36.999581  276591 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:22:36.999680  276591 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.059271ms
	I1119 22:22:36.999921  276591 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:22:37.000056  276591 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I1119 22:22:37.000195  276591 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:22:37.000314  276591 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:22:37.000421  276591 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.132785513s
	I1119 22:22:37.000532  276591 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.806845653s
	I1119 22:22:37.000619  276591 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002186224s
	I1119 22:22:37.000748  276591 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:22:37.000918  276591 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:22:37.000991  276591 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:22:37.001247  276591 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-409240 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:22:37.001330  276591 kubeadm.go:319] [bootstrap-token] Using token: jt6zlp.9o8ngv3uv5w6cuhp
	I1119 22:22:37.003023  276591 out.go:252]   - Configuring RBAC rules ...
	I1119 22:22:37.003118  276591 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:22:37.003186  276591 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:22:37.003306  276591 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:22:37.003409  276591 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:22:37.003506  276591 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:22:37.003574  276591 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:22:37.003667  276591 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:22:37.003702  276591 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:22:37.003739  276591 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:22:37.003742  276591 kubeadm.go:319] 
	I1119 22:22:37.003790  276591 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:22:37.003793  276591 kubeadm.go:319] 
	I1119 22:22:37.003856  276591 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:22:37.003859  276591 kubeadm.go:319] 
	I1119 22:22:37.003892  276591 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:22:37.003964  276591 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:22:37.004023  276591 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:22:37.004033  276591 kubeadm.go:319] 
	I1119 22:22:37.004094  276591 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:22:37.004099  276591 kubeadm.go:319] 
	I1119 22:22:37.004147  276591 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:22:37.004151  276591 kubeadm.go:319] 
	I1119 22:22:37.004214  276591 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:22:37.004319  276591 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:22:37.004407  276591 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:22:37.004414  276591 kubeadm.go:319] 
	I1119 22:22:37.004523  276591 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:22:37.004621  276591 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:22:37.004631  276591 kubeadm.go:319] 
	I1119 22:22:37.004735  276591 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token jt6zlp.9o8ngv3uv5w6cuhp \
	I1119 22:22:37.004929  276591 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6123875ff628fb9eedbd72f2253477865aa197083b84a1d60cb6c00de308bc63 \
	I1119 22:22:37.004991  276591 kubeadm.go:319] 	--control-plane 
	I1119 22:22:37.005007  276591 kubeadm.go:319] 
	I1119 22:22:37.005153  276591 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:22:37.005176  276591 kubeadm.go:319] 
	I1119 22:22:37.005312  276591 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token jt6zlp.9o8ngv3uv5w6cuhp \
	I1119 22:22:37.005533  276591 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6123875ff628fb9eedbd72f2253477865aa197083b84a1d60cb6c00de308bc63 
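For reference, the --discovery-token-ca-cert-hash printed in the join command above is the SHA-256 of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo; joining nodes use it to pin the CA during bootstrap. A minimal standalone Go sketch (illustrative only, not minikube code) that computes the same hash from a PEM-encoded CA certificate:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Read the cluster CA certificate, e.g. /var/lib/minikube/certs/ca.crt.
	pemBytes, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil || block.Type != "CERTIFICATE" {
		panic("no CERTIFICATE block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}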
	I1119 22:22:37.005580  276591 cni.go:84] Creating CNI manager for ""
	I1119 22:22:37.005601  276591 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:22:37.007317  276591 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:22:38.064085  216336 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:22:36.577259  280330 kubeadm.go:884] updating cluster {Name:newest-cni-982287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-982287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:22:36.577421  280330 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:22:36.577484  280330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:22:36.612845  280330 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:22:36.612874  280330 containerd.go:534] Images already preloaded, skipping extraction
	I1119 22:22:36.612983  280330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:22:36.648084  280330 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:22:36.648113  280330 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:22:36.648122  280330 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1119 22:22:36.648329  280330 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-982287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-982287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
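The kubelet drop-in shown above overrides ExecStart with node-specific flags (hostname override, node IP, kubeconfig paths) and is then copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down. A hedged sketch of rendering such a drop-in with Go's text/template; the struct and template names here are assumptions for illustration, not minikube's actual generator:

package main

import (
	"os"
	"text/template"
)

// kubeletOpts holds the per-node values substituted into the drop-in.
type kubeletOpts struct {
	BinDir      string
	NodeName    string
	NodeIP      string
	RuntimeUnit string
}

const dropIn = `[Unit]
Wants={{.RuntimeUnit}}

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the log above for newest-cni-982287.
	opts := kubeletOpts{
		BinDir:      "/var/lib/minikube/binaries/v1.34.1",
		NodeName:    "newest-cni-982287",
		NodeIP:      "192.168.85.2",
		RuntimeUnit: "containerd.service",
	}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}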
	I1119 22:22:36.648455  280330 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:22:36.695583  280330 cni.go:84] Creating CNI manager for ""
	I1119 22:22:36.695610  280330 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:22:36.695628  280330 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 22:22:36.695670  280330 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-982287 NodeName:newest-cni-982287 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:22:36.695944  280330 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-982287"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:22:36.696033  280330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:22:36.706266  280330 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:22:36.706329  280330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:22:36.715401  280330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1119 22:22:36.730738  280330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:22:36.749872  280330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
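The 2227-byte file just copied is the multi-document kubeadm config dumped above: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by "---". A small Go sketch, assuming gopkg.in/yaml.v3 is available, that splits such a stream and prints each document's apiVersion and kind (purely illustrative):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open(os.Args[1]) // e.g. /var/tmp/minikube/kubeadm.yaml
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // end of the multi-document stream
			}
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}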
	I1119 22:22:36.765531  280330 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:22:36.770080  280330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
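The one-liner above keeps /etc/hosts idempotent: any existing line ending in a tab plus control-plane.minikube.internal is filtered out before the current IP is appended, so repeated starts never accumulate stale records. An equivalent, hedged Go sketch of the same filter-and-append (illustrative; the real step runs over SSH with sudo and must preserve file permissions):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any line ending in "\t<host>" and appends "ip\thost".
func upsertHost(contents, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(contents, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // remove the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(upsertHost(string(data), "192.168.85.2", "control-plane.minikube.internal"))
}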
	I1119 22:22:36.782743  280330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:22:36.889055  280330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:22:36.923087  280330 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287 for IP: 192.168.85.2
	I1119 22:22:36.923112  280330 certs.go:195] generating shared ca certs ...
	I1119 22:22:36.923134  280330 certs.go:227] acquiring lock for ca certs: {Name:mkfe62d1b64cfdbe1c6a3d1f38aa0edc5b9ec419 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:36.923314  280330 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key
	I1119 22:22:36.923364  280330 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key
	I1119 22:22:36.923373  280330 certs.go:257] generating profile certs ...
	I1119 22:22:36.923440  280330 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/client.key
	I1119 22:22:36.923461  280330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/client.crt with IP's: []
	I1119 22:22:37.181324  280330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/client.crt ...
	I1119 22:22:37.181356  280330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/client.crt: {Name:mkb01b8326784e66b7df5ab019ef6110c6c012ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:37.181574  280330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/client.key ...
	I1119 22:22:37.181594  280330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/client.key: {Name:mk51e15b78fbe125c718c897366ec099f68b0cc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:37.181715  280330 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.key.9887c082
	I1119 22:22:37.181737  280330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.crt.9887c082 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1119 22:22:37.329761  280330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.crt.9887c082 ...
	I1119 22:22:37.329790  280330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.crt.9887c082: {Name:mk8b10d40b3a22fe4e2dc15032ab661d54c098d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:37.330014  280330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.key.9887c082 ...
	I1119 22:22:37.330044  280330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.key.9887c082: {Name:mk3a61ee08cec0f79ada62e8fb29583cd21e7bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:37.330156  280330 certs.go:382] copying /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.crt.9887c082 -> /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.crt
	I1119 22:22:37.330250  280330 certs.go:386] copying /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.key.9887c082 -> /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.key
	I1119 22:22:37.330322  280330 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.key
	I1119 22:22:37.330338  280330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.crt with IP's: []
	I1119 22:22:37.771878  280330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.crt ...
	I1119 22:22:37.771917  280330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.crt: {Name:mk58761e0e6ee7737a83048777838b9aec8854a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:37.801265  280330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.key ...
	I1119 22:22:37.801303  280330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.key: {Name:mk552820dd03268dd56a26bab7595fafc18517aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
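Each profile cert generated above (client, apiserver, aggregator proxy-client) is a fresh key pair signed by the corresponding CA already cached under .minikube. A minimal, hedged Go sketch of issuing one such client certificate with the standard library; this is not minikube's crypto.go, and the subject, validity and key size are assumptions for illustration:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

// issueClientCert signs a new client certificate with the given CA cert/key.
func issueClientCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, cn string) ([]byte, []byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: cn, Organization: []string{"system:masters"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	// Throwaway CA just to exercise issueClientCert; errors elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	certPEM, keyPEM, err := issueClientCert(caCert, caKey, "minikube-user")
	if err != nil {
		panic(err)
	}
	os.Stdout.Write(certPEM)
	os.Stdout.Write(keyPEM)
}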
	I1119 22:22:37.801622  280330 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem (1338 bytes)
	W1119 22:22:37.801678  280330 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821_empty.pem, impossibly tiny 0 bytes
	I1119 22:22:37.801689  280330 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:22:37.801717  280330 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:22:37.801748  280330 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:22:37.801779  280330 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem (1679 bytes)
	I1119 22:22:37.801829  280330 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem (1708 bytes)
	I1119 22:22:37.802674  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:22:37.875734  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:22:37.894413  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:22:37.926996  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:22:37.947970  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 22:22:37.967508  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:22:37.988163  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:22:38.007952  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:22:38.026511  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem --> /usr/share/ca-certificates/12821.pem (1338 bytes)
	I1119 22:22:38.063287  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem --> /usr/share/ca-certificates/128212.pem (1708 bytes)
	I1119 22:22:38.082212  280330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:22:38.102014  280330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:22:38.115085  280330 ssh_runner.go:195] Run: openssl version
	I1119 22:22:38.121839  280330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12821.pem && ln -fs /usr/share/ca-certificates/12821.pem /etc/ssl/certs/12821.pem"
	I1119 22:22:38.131648  280330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12821.pem
	I1119 22:22:38.135595  280330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12821.pem
	I1119 22:22:38.135658  280330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12821.pem
	I1119 22:22:38.172211  280330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12821.pem /etc/ssl/certs/51391683.0"
	I1119 22:22:38.182532  280330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128212.pem && ln -fs /usr/share/ca-certificates/128212.pem /etc/ssl/certs/128212.pem"
	I1119 22:22:38.191695  280330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128212.pem
	I1119 22:22:38.196453  280330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128212.pem
	I1119 22:22:38.196511  280330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128212.pem
	I1119 22:22:38.234306  280330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128212.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:22:38.244590  280330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:22:38.254043  280330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:22:38.258079  280330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:48 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:22:38.258138  280330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:22:38.295672  280330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
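The `openssl x509 -hash -noout` calls above compute the subject-name hash that OpenSSL-style trust stores use as the symlink name (e.g. b5213941.0 for minikubeCA.pem), so programs scanning /etc/ssl/certs can find the CA. A hedged Go sketch that verifies a leaf certificate against that CA file, which is what this trust-store wiring ultimately enables (the CA path is taken from the log; the leaf path is a hypothetical argument):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	caPEM, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		panic("could not parse CA PEM")
	}

	leafPEM, err := os.ReadFile(os.Args[1]) // e.g. a serving cert to check
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(leafPEM)
	if block == nil {
		panic("no PEM block in leaf file")
	}
	leaf, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Verify the chain against the minikube CA only (no system roots).
	if _, err := leaf.Verify(x509.VerifyOptions{Roots: pool}); err != nil {
		panic(err)
	}
	fmt.Println("certificate chains to minikubeCA")
}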
	I1119 22:22:38.304737  280330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:22:38.308765  280330 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:22:38.308829  280330 kubeadm.go:401] StartCluster: {Name:newest-cni-982287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-982287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:22:38.308934  280330 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:22:38.309011  280330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:22:38.338594  280330 cri.go:89] found id: ""
	I1119 22:22:38.338656  280330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:22:38.347771  280330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:22:38.356084  280330 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:22:38.356152  280330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:22:38.364447  280330 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:22:38.364471  280330 kubeadm.go:158] found existing configuration files:
	
	I1119 22:22:38.364519  280330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 22:22:38.372662  280330 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:22:38.372725  280330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:22:38.380501  280330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 22:22:38.388307  280330 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:22:38.388354  280330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:22:38.396282  280330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 22:22:38.403906  280330 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:22:38.403965  280330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:22:38.411846  280330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 22:22:38.419497  280330 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:22:38.419562  280330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
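The sequence above is stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if the endpoint is absent (here the files do not exist yet, so every grep exits with status 2 and the rm is a no-op). A hedged Go sketch of the same check for a single file (illustrative; the real code runs these commands over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// staleKubeconfig reports whether path exists but does not mention endpoint.
func staleKubeconfig(path, endpoint string) (bool, error) {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return false, nil // nothing to clean up
	}
	if err != nil {
		return false, err
	}
	return !strings.Contains(string(data), endpoint), nil
}

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, p := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		stale, err := staleKubeconfig(p, endpoint)
		if err != nil {
			panic(err)
		}
		if stale {
			fmt.Println("would remove stale", p)
		}
	}
}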
	I1119 22:22:38.427419  280330 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:22:38.471999  280330 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:22:38.472102  280330 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:22:38.506349  280330 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:22:38.506471  280330 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 22:22:38.506528  280330 kubeadm.go:319] OS: Linux
	I1119 22:22:38.506608  280330 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:22:38.506687  280330 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:22:38.506757  280330 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:22:38.506827  280330 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:22:38.506912  280330 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:22:38.506978  280330 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:22:38.507044  280330 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:22:38.507104  280330 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 22:22:38.578369  280330 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:22:38.578545  280330 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:22:38.578669  280330 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:22:38.583992  280330 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:22:38.586974  280330 out.go:252]   - Generating certificates and keys ...
	I1119 22:22:38.587060  280330 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:22:38.587142  280330 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:22:38.772077  280330 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:22:39.010675  280330 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:22:38.065634  216336 out.go:252]   - Generating certificates and keys ...
	I1119 22:22:38.065746  216336 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:22:38.065840  216336 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:22:38.065978  216336 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1119 22:22:38.066092  216336 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1119 22:22:38.066189  216336 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1119 22:22:38.066274  216336 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1119 22:22:38.066365  216336 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1119 22:22:38.066473  216336 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1119 22:22:38.066586  216336 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1119 22:22:38.066708  216336 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1119 22:22:38.066765  216336 kubeadm.go:319] [certs] Using the existing "sa" key
	I1119 22:22:38.066841  216336 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:22:38.249871  216336 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:22:38.510034  216336 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:22:38.953480  216336 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:22:39.188274  216336 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:22:39.320580  216336 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:22:39.321267  216336 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:22:39.324043  216336 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:22:39.326001  216336 out.go:252]   - Booting up control plane ...
	I1119 22:22:39.326149  216336 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:22:39.326288  216336 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:22:39.327083  216336 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:22:39.351951  216336 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:22:39.352138  216336 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:22:39.360710  216336 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:22:39.360981  216336 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:22:39.361053  216336 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:22:39.495517  216336 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:22:39.495708  216336 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:22:37.008922  276591 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:22:37.014610  276591 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:22:37.014639  276591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:22:37.031868  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:22:37.329119  276591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:22:37.329194  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:37.329194  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-409240 minikube.k8s.io/updated_at=2025_11_19T22_22_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=default-k8s-diff-port-409240 minikube.k8s.io/primary=true
	I1119 22:22:37.342712  276591 ops.go:34] apiserver oom_adj: -16
	I1119 22:22:37.420487  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:37.921546  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:38.421061  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:38.921271  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:39.421135  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:39.920573  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:40.421572  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:40.921543  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:39.291147  280330 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:22:39.803989  280330 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:22:39.857608  280330 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:22:39.857805  280330 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-982287] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:22:40.046677  280330 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:22:40.047344  280330 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-982287] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:22:40.324316  280330 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:22:40.485707  280330 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:22:40.758234  280330 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:22:40.758548  280330 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:22:40.887155  280330 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:22:40.966155  280330 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:22:41.277055  280330 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:22:41.449006  280330 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:22:41.880741  280330 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:22:41.881629  280330 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:22:41.887692  280330 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:22:41.421113  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:41.921101  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:42.420622  276591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:42.522253  276591 kubeadm.go:1114] duration metric: took 5.193121012s to wait for elevateKubeSystemPrivileges
	I1119 22:22:42.522299  276591 kubeadm.go:403] duration metric: took 15.978207866s to StartCluster
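The repeated `kubectl get sa default` runs above poll roughly every 500ms until the "default" ServiceAccount exists, which is what the elevateKubeSystemPrivileges duration measures. A hedged, generic poll-until-healthy helper in Go (illustrative, not minikube's implementation; it shells out to kubectl rather than using client-go):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// pollUntil runs check at the given interval until it succeeds or ctx expires.
func pollUntil(ctx context.Context, interval time.Duration, check func() error) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := check(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	err := pollUntil(ctx, 500*time.Millisecond, func() error {
		// Same probe as in the log: does the "default" ServiceAccount exist yet?
		return exec.CommandContext(ctx, "kubectl", "get", "sa", "default").Run()
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("default service account is ready")
}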
	I1119 22:22:42.522329  276591 settings.go:142] acquiring lock: {Name:mk3c795849984e82ee99295088dd85252bd75f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:42.522413  276591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:22:42.524002  276591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/kubeconfig: {Name:mk5b9093863cb8ca8629eea9fd861356875781d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:42.524286  276591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:22:42.524308  276591 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:22:42.524370  276591 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:22:42.524471  276591 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-409240"
	I1119 22:22:42.524490  276591 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-409240"
	I1119 22:22:42.524493  276591 config.go:182] Loaded profile config "default-k8s-diff-port-409240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:22:42.524509  276591 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-409240"
	I1119 22:22:42.524557  276591 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-409240"
	I1119 22:22:42.524581  276591 host.go:66] Checking if "default-k8s-diff-port-409240" exists ...
	I1119 22:22:42.524977  276591 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409240 --format={{.State.Status}}
	I1119 22:22:42.525109  276591 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409240 --format={{.State.Status}}
	I1119 22:22:42.526483  276591 out.go:179] * Verifying Kubernetes components...
	I1119 22:22:42.528866  276591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:22:42.598272  276591 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-409240"
	I1119 22:22:42.598516  276591 host.go:66] Checking if "default-k8s-diff-port-409240" exists ...
	I1119 22:22:42.599725  276591 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409240 --format={{.State.Status}}
	I1119 22:22:42.603372  276591 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:22:42.604788  276591 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:22:42.604968  276591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:22:42.605059  276591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409240
	I1119 22:22:42.635283  276591 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:22:42.635309  276591 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:22:42.636085  276591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409240
	I1119 22:22:42.641698  276591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/default-k8s-diff-port-409240/id_rsa Username:docker}
	I1119 22:22:42.672358  276591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/default-k8s-diff-port-409240/id_rsa Username:docker}
	I1119 22:22:42.680798  276591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:22:42.762621  276591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:22:42.807288  276591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:22:42.828033  276591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:22:42.981686  276591 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1119 22:22:42.983391  276591 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-409240" to be "Ready" ...
	I1119 22:22:43.217993  276591 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
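Addon enablement above amounts to copying each manifest onto the node and applying it with the node-local kubeconfig. A hedged sketch of that apply step via os/exec (paths taken from the log; the real runs go through minikube's ssh_runner with sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	}
	for _, m := range manifests {
		cmd := exec.Command(kubectl, "apply", "-f", m)
		// Point kubectl at the kubeconfig the node itself uses.
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("apply %s: %v\n%s", m, err, out))
		}
		fmt.Printf("applied %s\n", m)
	}
}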
	I1119 22:22:41.891465  280330 out.go:252]   - Booting up control plane ...
	I1119 22:22:41.891604  280330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:22:41.891699  280330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:22:41.891777  280330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:22:41.910166  280330 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:22:41.910315  280330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:22:41.917973  280330 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:22:41.918251  280330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:22:41.918334  280330 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:22:42.066811  280330 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:22:42.066999  280330 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:22:43.067875  280330 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001203567s
	I1119 22:22:43.071192  280330 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:22:43.071317  280330 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1119 22:22:43.071435  280330 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:22:43.071538  280330 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
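The control-plane checks above poll local health endpoints (kubelet on :10248 over plain HTTP, kube-controller-manager on :10257, kube-scheduler on :10259, and the apiserver's /livez) until they answer 200. A hedged sketch of such a poll in Go; TLS verification is skipped here only because the component serving certs are self-signed, and this is illustrative rather than kubeadm's actual check:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it answers 200 OK or the deadline passes.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Self-signed component serving certs; a real client would pin the CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	for _, u := range []string{
		"http://127.0.0.1:10248/healthz",  // kubelet
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
		"https://192.168.85.2:8443/livez", // kube-apiserver
	} {
		if err := waitHealthy(u, 4*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("healthy:", u)
	}
}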
	I1119 22:22:40.497124  216336 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001690602s
	I1119 22:22:40.500502  216336 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:22:40.500633  216336 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1119 22:22:40.500767  216336 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:22:40.500906  216336 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:22:41.881178  216336 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.38034623s
	I1119 22:22:42.959692  216336 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.458959453s
	I1119 22:22:45.003285  216336 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501853641s
	I1119 22:22:45.016734  216336 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:22:45.033443  216336 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:22:45.043685  216336 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:22:45.044006  216336 kubeadm.go:319] [mark-control-plane] Marking the node kubernetes-upgrade-133839 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:22:45.053396  216336 kubeadm.go:319] [bootstrap-token] Using token: piifbg.8xlm8l44mj6waatg
	I1119 22:22:45.054943  216336 out.go:252]   - Configuring RBAC rules ...
	I1119 22:22:45.055104  216336 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:22:45.058820  216336 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:22:45.071434  216336 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:22:45.079105  216336 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:22:45.085948  216336 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:22:45.095427  216336 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:22:43.219437  276591 addons.go:515] duration metric: took 695.065243ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 22:22:43.494716  276591 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-409240" context rescaled to 1 replicas
	W1119 22:22:44.987411  276591 node_ready.go:57] node "default-k8s-diff-port-409240" has "Ready":"False" status (will retry)
	I1119 22:22:45.410637  216336 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:22:45.845934  216336 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:22:46.411630  216336 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:22:46.413082  216336 kubeadm.go:319] 
	I1119 22:22:46.413170  216336 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:22:46.413178  216336 kubeadm.go:319] 
	I1119 22:22:46.413278  216336 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:22:46.413285  216336 kubeadm.go:319] 
	I1119 22:22:46.413314  216336 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:22:46.413542  216336 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:22:46.413611  216336 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:22:46.413618  216336 kubeadm.go:319] 
	I1119 22:22:46.413677  216336 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:22:46.413682  216336 kubeadm.go:319] 
	I1119 22:22:46.413736  216336 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:22:46.413742  216336 kubeadm.go:319] 
	I1119 22:22:46.413818  216336 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:22:46.413950  216336 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:22:46.414032  216336 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:22:46.414039  216336 kubeadm.go:319] 
	I1119 22:22:46.414140  216336 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:22:46.414225  216336 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:22:46.414232  216336 kubeadm.go:319] 
	I1119 22:22:46.414331  216336 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token piifbg.8xlm8l44mj6waatg \
	I1119 22:22:46.414626  216336 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6123875ff628fb9eedbd72f2253477865aa197083b84a1d60cb6c00de308bc63 \
	I1119 22:22:46.414711  216336 kubeadm.go:319] 	--control-plane 
	I1119 22:22:46.414724  216336 kubeadm.go:319] 
	I1119 22:22:46.414847  216336 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:22:46.414855  216336 kubeadm.go:319] 
	I1119 22:22:46.414980  216336 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token piifbg.8xlm8l44mj6waatg \
	I1119 22:22:46.415104  216336 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6123875ff628fb9eedbd72f2253477865aa197083b84a1d60cb6c00de308bc63 
	I1119 22:22:46.418675  216336 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 22:22:46.418818  216336 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:22:46.418844  216336 cni.go:84] Creating CNI manager for ""
	I1119 22:22:46.418853  216336 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:22:46.420937  216336 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:22:46.422212  216336 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:22:46.428306  216336 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:22:46.428333  216336 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:22:46.448526  216336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:22:46.741514  216336 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:22:46.741662  216336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubernetes-upgrade-133839 minikube.k8s.io/updated_at=2025_11_19T22_22_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=kubernetes-upgrade-133839 minikube.k8s.io/primary=true
	I1119 22:22:46.741849  216336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:22:46.764178  216336 ops.go:34] apiserver oom_adj: -16
	I1119 22:22:46.861946  216336 kubeadm.go:1114] duration metric: took 120.368499ms to wait for elevateKubeSystemPrivileges
	I1119 22:22:46.861978  216336 kubeadm.go:403] duration metric: took 4m27.980717292s to StartCluster
	I1119 22:22:46.861994  216336 settings.go:142] acquiring lock: {Name:mk3c795849984e82ee99295088dd85252bd75f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:46.862053  216336 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:22:46.864371  216336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/kubeconfig: {Name:mk5b9093863cb8ca8629eea9fd861356875781d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:22:46.864695  216336 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:22:46.864796  216336 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:22:46.864948  216336 addons.go:70] Setting storage-provisioner=true in profile "kubernetes-upgrade-133839"
	I1119 22:22:46.864972  216336 addons.go:239] Setting addon storage-provisioner=true in "kubernetes-upgrade-133839"
	W1119 22:22:46.864980  216336 addons.go:248] addon storage-provisioner should already be in state true
	I1119 22:22:46.864975  216336 addons.go:70] Setting default-storageclass=true in profile "kubernetes-upgrade-133839"
	I1119 22:22:46.865005  216336 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-133839"
	I1119 22:22:46.865009  216336 config.go:182] Loaded profile config "kubernetes-upgrade-133839": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:22:46.865016  216336 host.go:66] Checking if "kubernetes-upgrade-133839" exists ...
	I1119 22:22:46.865535  216336 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-133839 --format={{.State.Status}}
	I1119 22:22:46.865554  216336 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-133839 --format={{.State.Status}}
	I1119 22:22:46.866983  216336 out.go:179] * Verifying Kubernetes components...
	I1119 22:22:46.868487  216336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:22:46.892765  216336 kapi.go:59] client config for kubernetes-upgrade-133839: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/client.key", CAFile:"/home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 22:22:46.893129  216336 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:22:46.893133  216336 addons.go:239] Setting addon default-storageclass=true in "kubernetes-upgrade-133839"
	I1119 22:22:45.127186  280330 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.055818516s
	I1119 22:22:45.288271  280330 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.216770602s
	I1119 22:22:47.073693  280330 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002119485s
	I1119 22:22:47.090904  280330 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:22:47.107867  280330 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:22:47.125573  280330 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:22:47.126056  280330 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-982287 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:22:47.140096  280330 kubeadm.go:319] [bootstrap-token] Using token: nawamx.at28pr9lsma1zqur
	W1119 22:22:46.893150  216336 addons.go:248] addon default-storageclass should already be in state true
	I1119 22:22:46.893244  216336 host.go:66] Checking if "kubernetes-upgrade-133839" exists ...
	I1119 22:22:46.893678  216336 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-133839 --format={{.State.Status}}
	I1119 22:22:46.894588  216336 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:22:46.894611  216336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:22:46.894667  216336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-133839
	I1119 22:22:46.924171  216336 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:22:46.924199  216336 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:22:46.924259  216336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-133839
	I1119 22:22:46.926386  216336 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/kubernetes-upgrade-133839/id_rsa Username:docker}
	I1119 22:22:46.946946  216336 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/kubernetes-upgrade-133839/id_rsa Username:docker}
	I1119 22:22:47.011279  216336 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:22:47.028127  216336 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:22:47.028195  216336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:22:47.035198  216336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:22:47.042108  216336 api_server.go:72] duration metric: took 177.373338ms to wait for apiserver process to appear ...
	I1119 22:22:47.042137  216336 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:22:47.042160  216336 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:22:47.047781  216336 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 22:22:47.054963  216336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:22:47.056399  216336 api_server.go:141] control plane version: v1.34.1
	I1119 22:22:47.056430  216336 api_server.go:131] duration metric: took 14.28437ms to wait for apiserver health ...
	I1119 22:22:47.056440  216336 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:22:47.060286  216336 system_pods.go:59] 4 kube-system pods found
	I1119 22:22:47.060316  216336 system_pods.go:61] "etcd-kubernetes-upgrade-133839" [4b7e70b8-bddd-4dd9-8bdf-8dd86b3aa490] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 22:22:47.060324  216336 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-133839" [ba70c25d-5c49-416e-a497-183ff447e341] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 22:22:47.060334  216336 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-133839" [92521eba-8aef-4ff7-8792-c8afdbb49ea5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 22:22:47.060340  216336 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-133839" [697d39c2-e8e1-4e8b-b712-ad9e4562393a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 22:22:47.060346  216336 system_pods.go:74] duration metric: took 3.900625ms to wait for pod list to return data ...
	I1119 22:22:47.060370  216336 kubeadm.go:587] duration metric: took 195.639955ms to wait for: map[apiserver:true system_pods:true]
	I1119 22:22:47.060384  216336 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:22:47.063340  216336 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 22:22:47.063364  216336 node_conditions.go:123] node cpu capacity is 8
	I1119 22:22:47.063375  216336 node_conditions.go:105] duration metric: took 2.986684ms to run NodePressure ...
	I1119 22:22:47.063388  216336 start.go:242] waiting for startup goroutines ...
	I1119 22:22:47.388039  216336 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 22:22:47.389070  216336 addons.go:515] duration metric: took 524.277696ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 22:22:47.389117  216336 start.go:247] waiting for cluster config update ...
	I1119 22:22:47.389130  216336 start.go:256] writing updated cluster config ...
	I1119 22:22:47.389420  216336 ssh_runner.go:195] Run: rm -f paused
	I1119 22:22:47.449312  216336 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 22:22:47.451392  216336 out.go:179] * Done! kubectl is now configured to use "kubernetes-upgrade-133839" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	d8328ecf166b5       56cc512116c8f       8 seconds ago       Running             busybox                   0                   eb1640aaa1d2a       busybox                                      default
	c978f4fb9e859       52546a367cc9e       14 seconds ago      Running             coredns                   0                   bacdff11be525       coredns-66bc5c9577-dmd59                     kube-system
	ef38515030366       6e38f40d628db       14 seconds ago      Running             storage-provisioner       0                   764b37d3ae033       storage-provisioner                          kube-system
	f9ca6afe443ef       409467f978b4a       25 seconds ago      Running             kindnet-cni               0                   32897e54ab6f4       kindnet-st248                                kube-system
	e99f92f9441eb       fc25172553d79       25 seconds ago      Running             kube-proxy                0                   cee5ddf5f99fa       kube-proxy-b7gxk                             kube-system
	979e5f09853d6       7dd6aaa1717ab       36 seconds ago      Running             kube-scheduler            0                   e1f4dc800232c       kube-scheduler-embed-certs-299509            kube-system
	00b0e185d9e85       c80c8dbafe7dd       36 seconds ago      Running             kube-controller-manager   0                   6d621fcd51faa       kube-controller-manager-embed-certs-299509   kube-system
	cc73d1161063d       c3994bc696102       36 seconds ago      Running             kube-apiserver            0                   bc6d35a584260       kube-apiserver-embed-certs-299509            kube-system
	32d631be779d9       5f1f5298c888d       36 seconds ago      Running             etcd                      0                   7979510379921       etcd-embed-certs-299509                      kube-system
	
	
	==> containerd <==
	Nov 19 22:22:33 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:33.973974130Z" level=info msg="connecting to shim ef38515030366beb6fccedae1c7aa65324258714b98b367dda9dc76ee5b5d50c" address="unix:///run/containerd/s/eaf2199e68c707984e371883acb9b067310c430ccd07b64261836ee8335f62d1" protocol=ttrpc version=3
	Nov 19 22:22:33 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:33.997465041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dmd59,Uid:2c555b78-b464-40e7-be35-c2b2286321ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"bacdff11be5256a18280e514962c346368fdba45d149bfa15204be99dd6e5321\""
	Nov 19 22:22:34 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:34.016381792Z" level=info msg="CreateContainer within sandbox \"bacdff11be5256a18280e514962c346368fdba45d149bfa15204be99dd6e5321\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 19 22:22:34 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:34.026048816Z" level=info msg="Container c978f4fb9e8596decd6645bac1416185c389b884e1d3bcb98086559c4fb2ea82: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:22:34 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:34.032584126Z" level=info msg="StartContainer for \"ef38515030366beb6fccedae1c7aa65324258714b98b367dda9dc76ee5b5d50c\" returns successfully"
	Nov 19 22:22:34 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:34.035922910Z" level=info msg="CreateContainer within sandbox \"bacdff11be5256a18280e514962c346368fdba45d149bfa15204be99dd6e5321\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c978f4fb9e8596decd6645bac1416185c389b884e1d3bcb98086559c4fb2ea82\""
	Nov 19 22:22:34 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:34.036530808Z" level=info msg="StartContainer for \"c978f4fb9e8596decd6645bac1416185c389b884e1d3bcb98086559c4fb2ea82\""
	Nov 19 22:22:34 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:34.037558106Z" level=info msg="connecting to shim c978f4fb9e8596decd6645bac1416185c389b884e1d3bcb98086559c4fb2ea82" address="unix:///run/containerd/s/76edba81499667cae3998dc46fe4ad9fce3bbf71d4176b1c0b966787c64dd424" protocol=ttrpc version=3
	Nov 19 22:22:34 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:34.095375936Z" level=info msg="StartContainer for \"c978f4fb9e8596decd6645bac1416185c389b884e1d3bcb98086559c4fb2ea82\" returns successfully"
	Nov 19 22:22:36 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:36.990781626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:bb1fff85-0367-4004-a462-e99ccd3ceeb3,Namespace:default,Attempt:0,}"
	Nov 19 22:22:37 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:37.035902567Z" level=info msg="connecting to shim eb1640aaa1d2a531e4230cd157f8801d6829e698b902af50555135d2c4d7bc57" address="unix:///run/containerd/s/81defd07493ca4a9062a6edc54a757892e618deedef99502bebefb1276a5ff57" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 22:22:37 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:37.126605399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:bb1fff85-0367-4004-a462-e99ccd3ceeb3,Namespace:default,Attempt:0,} returns sandbox id \"eb1640aaa1d2a531e4230cd157f8801d6829e698b902af50555135d2c4d7bc57\""
	Nov 19 22:22:37 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:37.129653934Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.584163458Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.584917463Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396641"
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.586325608Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.588213148Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.588582882Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.458858165s"
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.588640335Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.592843046Z" level=info msg="CreateContainer within sandbox \"eb1640aaa1d2a531e4230cd157f8801d6829e698b902af50555135d2c4d7bc57\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.600595370Z" level=info msg="Container d8328ecf166b5f2c6edf429c7a3314bfd5420b5254c0bd7f3f9640a603a02bac: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.607485080Z" level=info msg="CreateContainer within sandbox \"eb1640aaa1d2a531e4230cd157f8801d6829e698b902af50555135d2c4d7bc57\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"d8328ecf166b5f2c6edf429c7a3314bfd5420b5254c0bd7f3f9640a603a02bac\""
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.608187133Z" level=info msg="StartContainer for \"d8328ecf166b5f2c6edf429c7a3314bfd5420b5254c0bd7f3f9640a603a02bac\""
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.609033179Z" level=info msg="connecting to shim d8328ecf166b5f2c6edf429c7a3314bfd5420b5254c0bd7f3f9640a603a02bac" address="unix:///run/containerd/s/81defd07493ca4a9062a6edc54a757892e618deedef99502bebefb1276a5ff57" protocol=ttrpc version=3
	Nov 19 22:22:39 embed-certs-299509 containerd[661]: time="2025-11-19T22:22:39.661255060Z" level=info msg="StartContainer for \"d8328ecf166b5f2c6edf429c7a3314bfd5420b5254c0bd7f3f9640a603a02bac\" returns successfully"
	
	
	==> coredns [c978f4fb9e8596decd6645bac1416185c389b884e1d3bcb98086559c4fb2ea82] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52135 - 57985 "HINFO IN 5202741956818390714.7152830120362697649. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02797357s
	
	
	==> describe nodes <==
	Name:               embed-certs-299509
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-299509
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=embed-certs-299509
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_22_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:22:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-299509
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:22:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:22:47 +0000   Wed, 19 Nov 2025 22:22:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:22:47 +0000   Wed, 19 Nov 2025 22:22:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:22:47 +0000   Wed, 19 Nov 2025 22:22:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:22:47 +0000   Wed, 19 Nov 2025 22:22:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-299509
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                89d06ccb-f6da-4042-90eb-6aa22f98b648
	  Boot ID:                    f21fb8e8-9754-4dc5-a8d9-ce41ba5f6057
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-dmd59                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-embed-certs-299509                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-st248                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-embed-certs-299509             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-embed-certs-299509    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-b7gxk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-embed-certs-299509             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node embed-certs-299509 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node embed-certs-299509 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x7 over 37s)  kubelet          Node embed-certs-299509 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  37s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  32s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  32s                kubelet          Node embed-certs-299509 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s                kubelet          Node embed-certs-299509 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s                kubelet          Node embed-certs-299509 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node embed-certs-299509 event: Registered Node embed-certs-299509 in Controller
	  Normal  NodeReady                15s                kubelet          Node embed-certs-299509 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov19 21:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001836] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.089012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.424964] i8042: Warning: Keylock active
	[  +0.011946] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499038] block sda: the capability attribute has been deprecated.
	[  +0.090446] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026259] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.862736] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [32d631be779d9fe68adb18531d9e8cc4ca0f6f57219fa3343bca45c04f81b0f6] <==
	{"level":"warn","ts":"2025-11-19T22:22:21.270385Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"227.706956ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" limit:1 ","response":"range_response_count:1 size:218"}
	{"level":"info","ts":"2025-11-19T22:22:21.270455Z","caller":"traceutil/trace.go:172","msg":"trace[680321887] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:1; response_revision:329; }","duration":"227.803365ms","start":"2025-11-19T22:22:21.042637Z","end":"2025-11-19T22:22:21.270441Z","steps":["trace[680321887] 'agreement among raft nodes before linearized reading'  (duration: 101.99388ms)","trace[680321887] 'range keys from in-memory index tree'  (duration: 125.627346ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:22:21.270394Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.65563ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766285565613889 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/embed-certs-299509.1879889df1c3cc39\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/embed-certs-299509.1879889df1c3cc39\" value_size:621 lease:6571766285565613236 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-19T22:22:21.270643Z","caller":"traceutil/trace.go:172","msg":"trace[1101998133] transaction","detail":"{read_only:false; response_revision:330; number_of_response:1; }","duration":"228.533731ms","start":"2025-11-19T22:22:21.042094Z","end":"2025-11-19T22:22:21.270628Z","steps":["trace[1101998133] 'process raft request'  (duration: 102.591256ms)","trace[1101998133] 'compare'  (duration: 125.534963ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T22:22:21.271605Z","caller":"traceutil/trace.go:172","msg":"trace[902524464] linearizableReadLoop","detail":"{readStateIndex:340; appliedIndex:340; }","duration":"126.985068ms","start":"2025-11-19T22:22:21.144605Z","end":"2025-11-19T22:22:21.271591Z","steps":["trace[902524464] 'read index received'  (duration: 126.962851ms)","trace[902524464] 'applied index is now lower than readState.Index'  (duration: 20.876µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:22:21.271774Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"228.630867ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-19T22:22:21.271818Z","caller":"traceutil/trace.go:172","msg":"trace[595777696] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:331; }","duration":"228.685668ms","start":"2025-11-19T22:22:21.043122Z","end":"2025-11-19T22:22:21.271808Z","steps":["trace[595777696] 'agreement among raft nodes before linearized reading'  (duration: 228.556693ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:22:21.271839Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.680356ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-cidrs-controller\" limit:1 ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-11-19T22:22:21.271909Z","caller":"traceutil/trace.go:172","msg":"trace[503953989] transaction","detail":"{read_only:false; response_revision:331; number_of_response:1; }","duration":"222.810074ms","start":"2025-11-19T22:22:21.049087Z","end":"2025-11-19T22:22:21.271897Z","steps":["trace[503953989] 'process raft request'  (duration: 222.504014ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:22:21.271777Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.150751ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-controller\" limit:1 ","response":"range_response_count:1 size:193"}
	{"level":"info","ts":"2025-11-19T22:22:21.271943Z","caller":"traceutil/trace.go:172","msg":"trace[1943891992] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-cidrs-controller; range_end:; response_count:1; response_revision:331; }","duration":"134.80824ms","start":"2025-11-19T22:22:21.137118Z","end":"2025-11-19T22:22:21.271926Z","steps":["trace[1943891992] 'agreement among raft nodes before linearized reading'  (duration: 134.486718ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:22:21.271963Z","caller":"traceutil/trace.go:172","msg":"trace[369689261] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-controller; range_end:; response_count:1; response_revision:331; }","duration":"185.33093ms","start":"2025-11-19T22:22:21.086608Z","end":"2025-11-19T22:22:21.271939Z","steps":["trace[369689261] 'agreement among raft nodes before linearized reading'  (duration: 185.04183ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:22:21.272177Z","caller":"traceutil/trace.go:172","msg":"trace[1590633831] transaction","detail":"{read_only:false; response_revision:333; number_of_response:1; }","duration":"222.289369ms","start":"2025-11-19T22:22:21.049877Z","end":"2025-11-19T22:22:21.272166Z","steps":["trace[1590633831] 'process raft request'  (duration: 222.100441ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:22:21.272178Z","caller":"traceutil/trace.go:172","msg":"trace[423127262] transaction","detail":"{read_only:false; response_revision:334; number_of_response:1; }","duration":"184.016417ms","start":"2025-11-19T22:22:21.088148Z","end":"2025-11-19T22:22:21.272165Z","steps":["trace[423127262] 'process raft request'  (duration: 183.91276ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:22:21.272502Z","caller":"traceutil/trace.go:172","msg":"trace[1537717388] transaction","detail":"{read_only:false; response_revision:332; number_of_response:1; }","duration":"223.136566ms","start":"2025-11-19T22:22:21.049345Z","end":"2025-11-19T22:22:21.272481Z","steps":["trace[1537717388] 'process raft request'  (duration: 222.337069ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:22:21.379297Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.536498ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/edit\" limit:1 ","response":"range_response_count:1 size:2313"}
	{"level":"info","ts":"2025-11-19T22:22:21.379372Z","caller":"traceutil/trace.go:172","msg":"trace[1561772185] range","detail":"{range_begin:/registry/clusterroles/edit; range_end:; response_count:1; response_revision:334; }","duration":"100.624296ms","start":"2025-11-19T22:22:21.278732Z","end":"2025-11-19T22:22:21.379356Z","steps":["trace[1561772185] 'agreement among raft nodes before linearized reading'  (duration: 94.263188ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:22:21.379526Z","caller":"traceutil/trace.go:172","msg":"trace[1771335165] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"100.790863ms","start":"2025-11-19T22:22:21.278716Z","end":"2025-11-19T22:22:21.379507Z","steps":["trace[1771335165] 'process raft request'  (duration: 94.244352ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:22:21.662980Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"175.756192ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" limit:1 ","response":"range_response_count:1 size:218"}
	{"level":"info","ts":"2025-11-19T22:22:21.663157Z","caller":"traceutil/trace.go:172","msg":"trace[1223092418] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:1; response_revision:336; }","duration":"175.948113ms","start":"2025-11-19T22:22:21.487187Z","end":"2025-11-19T22:22:21.663135Z","steps":["trace[1223092418] 'agreement among raft nodes before linearized reading'  (duration: 50.541951ms)","trace[1223092418] 'range keys from in-memory index tree'  (duration: 125.090203ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:22:21.663359Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.320003ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766285565613905 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/admin\" mod_revision:335 > success:<request_put:<key:\"/registry/clusterroles/admin\" value_size:3706 >> failure:<request_range:<key:\"/registry/clusterroles/admin\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-19T22:22:21.663430Z","caller":"traceutil/trace.go:172","msg":"trace[1411416401] linearizableReadLoop","detail":"{readStateIndex:347; appliedIndex:346; }","duration":"125.718571ms","start":"2025-11-19T22:22:21.537701Z","end":"2025-11-19T22:22:21.663420Z","steps":["trace[1411416401] 'read index received'  (duration: 13.54µs)","trace[1411416401] 'applied index is now lower than readState.Index'  (duration: 125.704264ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T22:22:21.663440Z","caller":"traceutil/trace.go:172","msg":"trace[1436096593] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"220.193377ms","start":"2025-11-19T22:22:21.443234Z","end":"2025-11-19T22:22:21.663427Z","steps":["trace[1436096593] 'process raft request'  (duration: 94.549625ms)","trace[1436096593] 'compare'  (duration: 125.198123ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:22:21.663916Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.271012ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-19T22:22:21.663966Z","caller":"traceutil/trace.go:172","msg":"trace[1341852426] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:337; }","duration":"126.349526ms","start":"2025-11-19T22:22:21.537605Z","end":"2025-11-19T22:22:21.663954Z","steps":["trace[1341852426] 'agreement among raft nodes before linearized reading'  (duration: 125.85502ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:22:48 up  1:05,  0 user,  load average: 4.71, 3.71, 2.39
	Linux embed-certs-299509 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f9ca6afe443eff6c4850214548f3d18190831351081a79b8efcd17d4127265ff] <==
	I1119 22:22:23.242556       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:22:23.242837       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1119 22:22:23.243088       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:22:23.243116       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:22:23.243140       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:22:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:22:23.446415       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:22:23.446818       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:22:23.447460       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:22:23.447656       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:22:23.847690       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:22:23.940641       1 metrics.go:72] Registering metrics
	I1119 22:22:23.941152       1 controller.go:711] "Syncing nftables rules"
	I1119 22:22:33.447997       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1119 22:22:33.448068       1 main.go:301] handling current node
	I1119 22:22:43.446324       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1119 22:22:43.446356       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cc73d1161063d4dbf9949d49375d5baccff72a3ad2ebde0910a448aad00cec6a] <==
	I1119 22:22:13.923253       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 22:22:13.924730       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:22:13.925565       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:22:13.931751       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:22:13.931773       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1119 22:22:13.933127       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:22:14.110006       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:22:14.825149       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:22:14.829564       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:22:14.829585       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:22:15.508150       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:22:15.553481       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:22:15.630445       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:22:15.641307       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1119 22:22:15.642614       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:22:15.648620       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:22:15.840777       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:22:16.707831       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:22:16.720509       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:22:16.734769       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:22:21.670622       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:22:21.746750       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:22:21.753846       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:22:21.945379       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1119 22:22:44.787345       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:49832: use of closed network connection
	
	
	==> kube-controller-manager [00b0e185d9e859d3c703686e910cdc28f1a3c21c4ad84c64e9872f583393a56e] <==
	I1119 22:22:21.083714       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 22:22:21.083795       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 22:22:21.083713       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 22:22:21.083855       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 22:22:21.083865       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 22:22:21.083893       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 22:22:21.089451       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1119 22:22:21.089644       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 22:22:21.089972       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:22:21.090102       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 22:22:21.090669       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 22:22:21.090703       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 22:22:21.090780       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:22:21.090790       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:22:21.090800       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:22:21.091857       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 22:22:21.091903       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 22:22:21.092294       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:22:21.093354       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 22:22:21.093600       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 22:22:21.096701       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 22:22:21.105422       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:22:21.117079       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:22:21.274253       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-299509" podCIDRs=["10.244.0.0/24"]
	I1119 22:22:36.043208       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e99f92f9441eba50aa579862f55732c0c715ab1185ac9b748a64f6597c21cd3e] <==
	I1119 22:22:22.733093       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:22:22.799926       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:22:22.901074       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:22:22.901121       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1119 22:22:22.901279       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:22:22.926594       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:22:22.926655       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:22:22.932355       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:22:22.932783       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:22:22.932817       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:22:22.934682       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:22:22.934875       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:22:22.934910       1 config.go:200] "Starting service config controller"
	I1119 22:22:22.934945       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:22:22.934964       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:22:22.934974       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:22:22.935006       1 config.go:309] "Starting node config controller"
	I1119 22:22:22.935013       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:22:23.035756       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 22:22:23.035802       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:22:23.035774       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:22:23.035800       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [979e5f09853d673db16f4f5458dd6ef974350045a743532dfe09873fb6a44243] <==
	E1119 22:22:13.870348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:22:13.870485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:22:13.870488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:22:13.870577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:22:13.870831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:22:13.870856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:22:13.870871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:22:13.871013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:22:13.871294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:22:14.693069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:22:14.704384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:22:14.705402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:22:14.750865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:22:14.840749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:22:14.847092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:22:14.895857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 22:22:14.972872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:22:14.989326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:22:15.054782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:22:15.062386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:22:15.078092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:22:15.152679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:22:15.254176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:22:15.271651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1119 22:22:16.967246       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:22:17 embed-certs-299509 kubelet[1460]: I1119 22:22:17.655664    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-299509" podStartSLOduration=1.655639949 podStartE2EDuration="1.655639949s" podCreationTimestamp="2025-11-19 22:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:17.642862964 +0000 UTC m=+1.171882551" watchObservedRunningTime="2025-11-19 22:22:17.655639949 +0000 UTC m=+1.184659537"
	Nov 19 22:22:17 embed-certs-299509 kubelet[1460]: I1119 22:22:17.672191    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-299509" podStartSLOduration=1.6721652169999999 podStartE2EDuration="1.672165217s" podCreationTimestamp="2025-11-19 22:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:17.655838038 +0000 UTC m=+1.184857611" watchObservedRunningTime="2025-11-19 22:22:17.672165217 +0000 UTC m=+1.201184803"
	Nov 19 22:22:17 embed-certs-299509 kubelet[1460]: I1119 22:22:17.692583    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-299509" podStartSLOduration=1.6924579199999998 podStartE2EDuration="1.69245792s" podCreationTimestamp="2025-11-19 22:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:17.672742323 +0000 UTC m=+1.201761910" watchObservedRunningTime="2025-11-19 22:22:17.69245792 +0000 UTC m=+1.221477506"
	Nov 19 22:22:17 embed-certs-299509 kubelet[1460]: I1119 22:22:17.695109    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-299509" podStartSLOduration=1.695085562 podStartE2EDuration="1.695085562s" podCreationTimestamp="2025-11-19 22:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:17.693709128 +0000 UTC m=+1.222728722" watchObservedRunningTime="2025-11-19 22:22:17.695085562 +0000 UTC m=+1.224105149"
	Nov 19 22:22:21 embed-certs-299509 kubelet[1460]: I1119 22:22:21.276541    1460 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 22:22:21 embed-certs-299509 kubelet[1460]: I1119 22:22:21.277398    1460 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 22:22:22 embed-certs-299509 kubelet[1460]: I1119 22:22:22.106241    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fc0c848b-ceac-4473-8a9f-42665ee25a5b-kube-proxy\") pod \"kube-proxy-b7gxk\" (UID: \"fc0c848b-ceac-4473-8a9f-42665ee25a5b\") " pod="kube-system/kube-proxy-b7gxk"
	Nov 19 22:22:22 embed-certs-299509 kubelet[1460]: I1119 22:22:22.106303    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc0c848b-ceac-4473-8a9f-42665ee25a5b-xtables-lock\") pod \"kube-proxy-b7gxk\" (UID: \"fc0c848b-ceac-4473-8a9f-42665ee25a5b\") " pod="kube-system/kube-proxy-b7gxk"
	Nov 19 22:22:22 embed-certs-299509 kubelet[1460]: I1119 22:22:22.106326    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc0c848b-ceac-4473-8a9f-42665ee25a5b-lib-modules\") pod \"kube-proxy-b7gxk\" (UID: \"fc0c848b-ceac-4473-8a9f-42665ee25a5b\") " pod="kube-system/kube-proxy-b7gxk"
	Nov 19 22:22:22 embed-certs-299509 kubelet[1460]: I1119 22:22:22.106357    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t5mg\" (UniqueName: \"kubernetes.io/projected/fc0c848b-ceac-4473-8a9f-42665ee25a5b-kube-api-access-5t5mg\") pod \"kube-proxy-b7gxk\" (UID: \"fc0c848b-ceac-4473-8a9f-42665ee25a5b\") " pod="kube-system/kube-proxy-b7gxk"
	Nov 19 22:22:22 embed-certs-299509 kubelet[1460]: I1119 22:22:22.106383    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f41e3e9-9f6f-4f28-a037-373cc6996455-xtables-lock\") pod \"kindnet-st248\" (UID: \"0f41e3e9-9f6f-4f28-a037-373cc6996455\") " pod="kube-system/kindnet-st248"
	Nov 19 22:22:22 embed-certs-299509 kubelet[1460]: I1119 22:22:22.106408    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0f41e3e9-9f6f-4f28-a037-373cc6996455-cni-cfg\") pod \"kindnet-st248\" (UID: \"0f41e3e9-9f6f-4f28-a037-373cc6996455\") " pod="kube-system/kindnet-st248"
	Nov 19 22:22:22 embed-certs-299509 kubelet[1460]: I1119 22:22:22.106431    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f41e3e9-9f6f-4f28-a037-373cc6996455-lib-modules\") pod \"kindnet-st248\" (UID: \"0f41e3e9-9f6f-4f28-a037-373cc6996455\") " pod="kube-system/kindnet-st248"
	Nov 19 22:22:22 embed-certs-299509 kubelet[1460]: I1119 22:22:22.106456    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg5nw\" (UniqueName: \"kubernetes.io/projected/0f41e3e9-9f6f-4f28-a037-373cc6996455-kube-api-access-lg5nw\") pod \"kindnet-st248\" (UID: \"0f41e3e9-9f6f-4f28-a037-373cc6996455\") " pod="kube-system/kindnet-st248"
	Nov 19 22:22:23 embed-certs-299509 kubelet[1460]: I1119 22:22:23.634905    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b7gxk" podStartSLOduration=2.6348579279999997 podStartE2EDuration="2.634857928s" podCreationTimestamp="2025-11-19 22:22:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:23.634459761 +0000 UTC m=+7.163479350" watchObservedRunningTime="2025-11-19 22:22:23.634857928 +0000 UTC m=+7.163877515"
	Nov 19 22:22:23 embed-certs-299509 kubelet[1460]: I1119 22:22:23.648626    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-st248" podStartSLOduration=2.648600591 podStartE2EDuration="2.648600591s" podCreationTimestamp="2025-11-19 22:22:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:23.648350989 +0000 UTC m=+7.177370575" watchObservedRunningTime="2025-11-19 22:22:23.648600591 +0000 UTC m=+7.177620178"
	Nov 19 22:22:33 embed-certs-299509 kubelet[1460]: I1119 22:22:33.504398    1460 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 22:22:33 embed-certs-299509 kubelet[1460]: I1119 22:22:33.588726    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8272\" (UniqueName: \"kubernetes.io/projected/2c555b78-b464-40e7-be35-c2b2286321ab-kube-api-access-b8272\") pod \"coredns-66bc5c9577-dmd59\" (UID: \"2c555b78-b464-40e7-be35-c2b2286321ab\") " pod="kube-system/coredns-66bc5c9577-dmd59"
	Nov 19 22:22:33 embed-certs-299509 kubelet[1460]: I1119 22:22:33.589040    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/87ae0335-b9d0-4969-8fd0-febca42399e1-tmp\") pod \"storage-provisioner\" (UID: \"87ae0335-b9d0-4969-8fd0-febca42399e1\") " pod="kube-system/storage-provisioner"
	Nov 19 22:22:33 embed-certs-299509 kubelet[1460]: I1119 22:22:33.589078    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c555b78-b464-40e7-be35-c2b2286321ab-config-volume\") pod \"coredns-66bc5c9577-dmd59\" (UID: \"2c555b78-b464-40e7-be35-c2b2286321ab\") " pod="kube-system/coredns-66bc5c9577-dmd59"
	Nov 19 22:22:33 embed-certs-299509 kubelet[1460]: I1119 22:22:33.589105    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgfsd\" (UniqueName: \"kubernetes.io/projected/87ae0335-b9d0-4969-8fd0-febca42399e1-kube-api-access-wgfsd\") pod \"storage-provisioner\" (UID: \"87ae0335-b9d0-4969-8fd0-febca42399e1\") " pod="kube-system/storage-provisioner"
	Nov 19 22:22:34 embed-certs-299509 kubelet[1460]: I1119 22:22:34.735707    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dmd59" podStartSLOduration=12.735683653 podStartE2EDuration="12.735683653s" podCreationTimestamp="2025-11-19 22:22:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:34.735414493 +0000 UTC m=+18.264434080" watchObservedRunningTime="2025-11-19 22:22:34.735683653 +0000 UTC m=+18.264703241"
	Nov 19 22:22:34 embed-certs-299509 kubelet[1460]: I1119 22:22:34.792568    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.792544284 podStartE2EDuration="12.792544284s" podCreationTimestamp="2025-11-19 22:22:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:34.777928592 +0000 UTC m=+18.306948393" watchObservedRunningTime="2025-11-19 22:22:34.792544284 +0000 UTC m=+18.321563871"
	Nov 19 22:22:36 embed-certs-299509 kubelet[1460]: I1119 22:22:36.717940    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgxp6\" (UniqueName: \"kubernetes.io/projected/bb1fff85-0367-4004-a462-e99ccd3ceeb3-kube-api-access-dgxp6\") pod \"busybox\" (UID: \"bb1fff85-0367-4004-a462-e99ccd3ceeb3\") " pod="default/busybox"
	Nov 19 22:22:39 embed-certs-299509 kubelet[1460]: I1119 22:22:39.675004    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.213846338 podStartE2EDuration="3.674985904s" podCreationTimestamp="2025-11-19 22:22:36 +0000 UTC" firstStartedPulling="2025-11-19 22:22:37.128554557 +0000 UTC m=+20.657574128" lastFinishedPulling="2025-11-19 22:22:39.589694129 +0000 UTC m=+23.118713694" observedRunningTime="2025-11-19 22:22:39.67448623 +0000 UTC m=+23.203505818" watchObservedRunningTime="2025-11-19 22:22:39.674985904 +0000 UTC m=+23.204005490"
	
	
	==> storage-provisioner [ef38515030366beb6fccedae1c7aa65324258714b98b367dda9dc76ee5b5d50c] <==
	I1119 22:22:34.043729       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:22:34.054355       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:22:34.054397       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:22:34.056832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:34.062389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:22:34.062655       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:22:34.062785       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3efdb2d9-5b85-4442-8675-f3018b942da4", APIVersion:"v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-299509_3b4150a4-9140-4907-b679-5423aa10fdf1 became leader
	I1119 22:22:34.062900       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-299509_3b4150a4-9140-4907-b679-5423aa10fdf1!
	W1119 22:22:34.065632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:34.068997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:22:34.163624       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-299509_3b4150a4-9140-4907-b679-5423aa10fdf1!
	W1119 22:22:36.073007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:36.078611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:38.082224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:38.086304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:40.089677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:40.095726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:42.100265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:42.105517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:44.110538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:44.115648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:46.119456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:46.124346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:48.127775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:48.133236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-299509 -n embed-certs-299509
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-299509 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (12.69s)
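Note: all four DeployApp failures in this run trip the same assertion — `ulimit -n` inside the busybox test pod returns 1024 where the test expects 1048576. A minimal sketch of a manual reproduction against the embed-certs profile, assuming the profile and the busybox pod from testdata/busybox.yaml are still present (profile and pod names are taken from the logs above; the hard-limit and systemd checks are hypothetical extra diagnostics, not part of the test):

	# soft and hard open-file limits as seen inside the pod
	kubectl --context embed-certs-299509 exec busybox -- /bin/sh -c "ulimit -n"
	kubectl --context embed-certs-299509 exec busybox -- /bin/sh -c "ulimit -Hn"
	# file-descriptor limits configured on the containerd unit inside the kic node
	out/minikube-linux-amd64 -p embed-certs-299509 ssh -- systemctl show containerd --property=LimitNOFILE,LimitNOFILESoft

If the pod's soft limit (1024) is lower than what the containerd unit advertises, the discrepancy is being introduced between the runtime and the container, rather than by the pod spec itself.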

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (14.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-409240 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [eee884c0-8976-48f4-8b93-86a4bc150754] Pending
helpers_test.go:352: "busybox" [eee884c0-8976-48f4-8b93-86a4bc150754] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [eee884c0-8976-48f4-8b93-86a4bc150754] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004135083s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-409240 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-409240
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-409240:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3b17c5bd31e911d5318e1d15c108200bc41b828ccdc2a42595cb1e5105575917",
	        "Created": "2025-11-19T22:22:21.77385695Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 278622,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:22:21.815590779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/3b17c5bd31e911d5318e1d15c108200bc41b828ccdc2a42595cb1e5105575917/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3b17c5bd31e911d5318e1d15c108200bc41b828ccdc2a42595cb1e5105575917/hostname",
	        "HostsPath": "/var/lib/docker/containers/3b17c5bd31e911d5318e1d15c108200bc41b828ccdc2a42595cb1e5105575917/hosts",
	        "LogPath": "/var/lib/docker/containers/3b17c5bd31e911d5318e1d15c108200bc41b828ccdc2a42595cb1e5105575917/3b17c5bd31e911d5318e1d15c108200bc41b828ccdc2a42595cb1e5105575917-json.log",
	        "Name": "/default-k8s-diff-port-409240",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-409240:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-409240",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3b17c5bd31e911d5318e1d15c108200bc41b828ccdc2a42595cb1e5105575917",
	                "LowerDir": "/var/lib/docker/overlay2/392152a539b2b6aa19f080300004655fe7ee996f97e05c5db8a867188aadb05a-init/diff:/var/lib/docker/overlay2/b09480e350abbb2f4f48b19448dc8e9ddd0de679fdb8cd59ebc5b758a29b344e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/392152a539b2b6aa19f080300004655fe7ee996f97e05c5db8a867188aadb05a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/392152a539b2b6aa19f080300004655fe7ee996f97e05c5db8a867188aadb05a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/392152a539b2b6aa19f080300004655fe7ee996f97e05c5db8a867188aadb05a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-409240",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-409240/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-409240",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-409240",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-409240",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e68c02bb9ea3360a612f1134ddaf57df5b051c02e3ef2cfb13033f5b87534b1e",
	            "SandboxKey": "/var/run/docker/netns/e68c02bb9ea3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-409240": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a65c679966f5fdb88a7dd03de3ed6928298f7cf3afd6677cb80dabeb6ed9ab1f",
	                    "EndpointID": "57ff73ba7b4ad1304fa06523327a71a8daf478da794015e51f865c70924e7297",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "a2:f6:e1:37:94:d9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-409240",
	                        "3b17c5bd31e9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
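The HostConfig section of the inspect output above reports "Ulimits": [], i.e. no per-container ulimit overrides were requested for the kic node container, so it inherits whatever defaults the Docker daemon applies. A hedged sketch for narrowing that down on the CI host (assumes a local Docker Engine and a readable /proc; these commands are diagnostic suggestions, not part of the test suite):

	# show only the ulimit overrides recorded for the node container (expected: null/empty here)
	docker inspect --format '{{json .HostConfig.Ulimits}}' default-k8s-diff-port-409240
	# open-file limits of the dockerd process that spawned it
	grep 'open files' /proc/$(pgrep -xo dockerd)/limits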
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-409240 -n default-k8s-diff-port-409240
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-409240 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-409240 logs -n 25: (1.427046707s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ pause   │ -p old-k8s-version-975700 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:21 UTC │
	│ unpause │ -p old-k8s-version-975700 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:21 UTC │
	│ delete  │ -p old-k8s-version-975700                                                                                                                                                                                                                           │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:21 UTC │
	│ delete  │ -p old-k8s-version-975700                                                                                                                                                                                                                           │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:21 UTC │
	│ start   │ -p embed-certs-299509 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-299509           │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:22 UTC │
	│ image   │ no-preload-638439 image list --format=json                                                                                                                                                                                                          │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ pause   │ -p no-preload-638439 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ unpause │ -p no-preload-638439 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ delete  │ -p no-preload-638439                                                                                                                                                                                                                                │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ start   │ -p cert-expiration-207460 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-207460       │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ delete  │ -p no-preload-638439                                                                                                                                                                                                                                │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ delete  │ -p disable-driver-mounts-837642                                                                                                                                                                                                                     │ disable-driver-mounts-837642 │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ start   │ -p default-k8s-diff-port-409240 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-409240 │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ delete  │ -p cert-expiration-207460                                                                                                                                                                                                                           │ cert-expiration-207460       │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ start   │ -p newest-cni-982287 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-982287            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ start   │ -p kubernetes-upgrade-133839 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-133839    │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │                     │
	│ start   │ -p kubernetes-upgrade-133839 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-133839    │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-299509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-299509           │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ stop    │ -p embed-certs-299509 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-299509           │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:23 UTC │
	│ addons  │ enable metrics-server -p newest-cni-982287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-982287            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ stop    │ -p newest-cni-982287 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-982287            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-982287 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-982287            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ start   │ -p newest-cni-982287 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-982287            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-299509 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-299509           │ jenkins │ v1.37.0 │ 19 Nov 25 22:23 UTC │ 19 Nov 25 22:23 UTC │
	│ start   │ -p embed-certs-299509 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-299509           │ jenkins │ v1.37.0 │ 19 Nov 25 22:23 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:23:03
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:23:03.157576  291097 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:23:03.158164  291097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:23:03.158175  291097 out.go:374] Setting ErrFile to fd 2...
	I1119 22:23:03.158189  291097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:23:03.160681  291097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
	I1119 22:23:03.161677  291097 out.go:368] Setting JSON to false
	I1119 22:23:03.163195  291097 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3923,"bootTime":1763587060,"procs":316,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:23:03.163350  291097 start.go:143] virtualization: kvm guest
	I1119 22:23:03.165935  291097 out.go:179] * [embed-certs-299509] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:23:03.168168  291097 notify.go:221] Checking for updates...
	I1119 22:23:03.168751  291097 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:23:03.170450  291097 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:23:03.172318  291097 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:23:03.173692  291097 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9296/.minikube
	I1119 22:23:03.174947  291097 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:23:03.176377  291097 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:23:03.178127  291097 config.go:182] Loaded profile config "embed-certs-299509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:23:03.178793  291097 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:23:03.210556  291097 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:23:03.210658  291097 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:23:03.292238  291097 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-19 22:23:03.279121572 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:23:03.292398  291097 docker.go:319] overlay module found
	I1119 22:23:03.294154  291097 out.go:179] * Using the docker driver based on existing profile
	I1119 22:23:03.295345  291097 start.go:309] selected driver: docker
	I1119 22:23:03.295362  291097 start.go:930] validating driver "docker" against &{Name:embed-certs-299509 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-299509 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:23:03.295472  291097 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:23:03.296267  291097 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:23:03.387380  291097 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-19 22:23:03.374795628 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:23:03.387761  291097 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:23:03.387801  291097 cni.go:84] Creating CNI manager for ""
	I1119 22:23:03.387863  291097 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:23:03.387986  291097 start.go:353] cluster config:
	{Name:embed-certs-299509 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-299509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:23:03.389793  291097 out.go:179] * Starting "embed-certs-299509" primary control-plane node in "embed-certs-299509" cluster
	I1119 22:23:03.391334  291097 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 22:23:03.392412  291097 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:23:03.393463  291097 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:23:03.393516  291097 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1119 22:23:03.393534  291097 cache.go:65] Caching tarball of preloaded images
	I1119 22:23:03.393632  291097 preload.go:238] Found /home/jenkins/minikube-integration/21918-9296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1119 22:23:03.393653  291097 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1119 22:23:03.393785  291097 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/embed-certs-299509/config.json ...
	I1119 22:23:03.394071  291097 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:23:03.426146  291097 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:23:03.426190  291097 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:23:03.426206  291097 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:23:03.426246  291097 start.go:360] acquireMachinesLock for embed-certs-299509: {Name:mk01324288749056d93755268e5197a67d733c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:23:03.426308  291097 start.go:364] duration metric: took 38.25µs to acquireMachinesLock for "embed-certs-299509"
	I1119 22:23:03.426330  291097 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:23:03.426343  291097 fix.go:54] fixHost starting: 
	I1119 22:23:03.426633  291097 cli_runner.go:164] Run: docker container inspect embed-certs-299509 --format={{.State.Status}}
	I1119 22:23:03.452005  291097 fix.go:112] recreateIfNeeded on embed-certs-299509: state=Stopped err=<nil>
	W1119 22:23:03.452048  291097 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 22:23:02.736661  289819 ssh_runner.go:195] Run: systemctl --version
	I1119 22:23:02.812191  289819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:23:02.818626  289819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:23:02.818718  289819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:23:02.828777  289819 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 22:23:02.828801  289819 start.go:496] detecting cgroup driver to use...
	I1119 22:23:02.828845  289819 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 22:23:02.828920  289819 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 22:23:02.853350  289819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 22:23:02.871869  289819 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:23:02.871949  289819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:23:02.893720  289819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:23:02.912464  289819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:23:03.036114  289819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:23:03.149726  289819 docker.go:234] disabling docker service ...
	I1119 22:23:03.149802  289819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:23:03.169470  289819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:23:03.188181  289819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:23:03.305350  289819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:23:03.439604  289819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:23:03.457497  289819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:23:03.476108  289819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 22:23:03.488537  289819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 22:23:03.501118  289819 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1119 22:23:03.501294  289819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1119 22:23:03.515209  289819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:23:03.528017  289819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 22:23:03.540494  289819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:23:03.552364  289819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:23:03.565022  289819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 22:23:03.576305  289819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 22:23:03.589496  289819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 22:23:03.603528  289819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:23:03.624005  289819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:23:03.635724  289819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:23:03.758525  289819 ssh_runner.go:195] Run: sudo systemctl restart containerd
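For reference, the chain of sed edits above (sandbox_image, restrict_oom_score_adj, SystemdCgroup, runtime names, conf_dir, enable_unprivileged_ports) rewrites /etc/containerd/config.toml in place before containerd is restarted. The resulting file is not captured in this log; the fragment below is only a sketch of the touched keys, with the table names taken from the sed patterns themselves and the nesting assumed:

	[plugins."io.containerd.grpc.v1.cri"]
	  # re-inserted directly under the cri plugin table by the last sed above
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10.1"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"   # v1 runtime names rewritten to runc.v2
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = true                   # matches the "systemd" cgroup driver detected on the host

Setting SystemdCgroup = true is what aligns containerd's runc runtime with the systemd cgroup driver reported by detect.go earlier, which kubelet is also configured to use.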
	I1119 22:23:04.070301  289819 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 22:23:04.070393  289819 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 22:23:04.075986  289819 start.go:564] Will wait 60s for crictl version
	I1119 22:23:04.076174  289819 ssh_runner.go:195] Run: which crictl
	I1119 22:23:04.081168  289819 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:23:04.113615  289819 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 22:23:04.113679  289819 ssh_runner.go:195] Run: containerd --version
	I1119 22:23:04.143037  289819 ssh_runner.go:195] Run: containerd --version
	I1119 22:23:04.265688  289819 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1119 22:23:04.307205  289819 cli_runner.go:164] Run: docker network inspect newest-cni-982287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:23:04.334262  289819 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 22:23:04.340151  289819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:23:04.356873  289819 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1119 22:23:04.358574  289819 kubeadm.go:884] updating cluster {Name:newest-cni-982287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-982287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:23:04.358770  289819 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:23:04.358841  289819 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:23:04.398139  289819 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:23:04.398162  289819 containerd.go:534] Images already preloaded, skipping extraction
	I1119 22:23:04.398229  289819 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:23:04.434518  289819 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:23:04.434545  289819 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:23:04.434555  289819 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1119 22:23:04.434692  289819 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-982287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-982287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:23:04.434757  289819 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:23:04.473195  289819 cni.go:84] Creating CNI manager for ""
	I1119 22:23:04.473224  289819 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:23:04.473244  289819 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 22:23:04.473273  289819 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-982287 NodeName:newest-cni-982287 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:23:04.476476  289819 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-982287"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:23:04.476589  289819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:23:04.489907  289819 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:23:04.489991  289819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:23:04.502074  289819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1119 22:23:04.519629  289819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:23:04.535867  289819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1119 22:23:04.552231  289819 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:23:04.556322  289819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:23:04.581469  289819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:23:04.693660  289819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:23:04.718231  289819 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287 for IP: 192.168.85.2
	I1119 22:23:04.718257  289819 certs.go:195] generating shared ca certs ...
	I1119 22:23:04.718280  289819 certs.go:227] acquiring lock for ca certs: {Name:mkfe62d1b64cfdbe1c6a3d1f38aa0edc5b9ec419 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:23:04.718419  289819 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key
	I1119 22:23:04.718456  289819 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key
	I1119 22:23:04.718466  289819 certs.go:257] generating profile certs ...
	I1119 22:23:04.718538  289819 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/client.key
	I1119 22:23:04.718592  289819 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.key.9887c082
	I1119 22:23:04.718627  289819 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.key
	I1119 22:23:04.718723  289819 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem (1338 bytes)
	W1119 22:23:04.718762  289819 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821_empty.pem, impossibly tiny 0 bytes
	I1119 22:23:04.718772  289819 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:23:04.718795  289819 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:23:04.718816  289819 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:23:04.718836  289819 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem (1679 bytes)
	I1119 22:23:04.718873  289819 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem (1708 bytes)
	I1119 22:23:04.719523  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:23:04.740425  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:23:04.767473  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:23:04.788643  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:23:04.812826  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 22:23:04.840949  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:23:04.874334  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:23:04.904591  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:23:04.934179  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem --> /usr/share/ca-certificates/12821.pem (1338 bytes)
	I1119 22:23:04.960987  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem --> /usr/share/ca-certificates/128212.pem (1708 bytes)
	I1119 22:23:04.986601  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:23:05.015180  289819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:23:05.034021  289819 ssh_runner.go:195] Run: openssl version
	I1119 22:23:05.043564  289819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:23:05.055163  289819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:23:05.060517  289819 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:48 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:23:05.060591  289819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:23:05.110854  289819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:23:05.121131  289819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12821.pem && ln -fs /usr/share/ca-certificates/12821.pem /etc/ssl/certs/12821.pem"
	I1119 22:23:05.132030  289819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12821.pem
	I1119 22:23:05.137183  289819 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12821.pem
	I1119 22:23:05.137252  289819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12821.pem
	I1119 22:23:05.181980  289819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12821.pem /etc/ssl/certs/51391683.0"
	I1119 22:23:05.190709  289819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128212.pem && ln -fs /usr/share/ca-certificates/128212.pem /etc/ssl/certs/128212.pem"
	I1119 22:23:05.200151  289819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128212.pem
	I1119 22:23:05.204199  289819 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128212.pem
	I1119 22:23:05.204246  289819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128212.pem
	I1119 22:23:05.241474  289819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128212.pem /etc/ssl/certs/3ec20f2e.0"
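The openssl/ln pairs above build the standard OpenSSL hashed-symlink layout in /etc/ssl/certs: each CA certificate gets a <subject-hash>.0 symlink so that trust lookups by hash succeed. A minimal shell equivalent of one such pair, using the paths from this log (not minikube's actual code), is:

	# subject-name hash of the minikubeCA cert; the corresponding symlink created above is b5213941.0
	hash="$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)"
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"

The same pattern is repeated for 12821.pem and 128212.pem, whose hash symlinks appear in the log as 51391683.0 and 3ec20f2e.0.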
	I1119 22:23:05.251448  289819 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:23:05.256282  289819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:23:05.317457  289819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:23:05.376306  289819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:23:05.430481  289819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:23:05.489491  289819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:23:05.548292  289819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1119 22:23:05.609139  289819 kubeadm.go:401] StartCluster: {Name:newest-cni-982287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-982287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:23:05.609282  289819 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:23:05.609365  289819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:23:05.648293  289819 cri.go:89] found id: "b7d166c33134d44e66ca4a753e174e29a312fb2e2f29c86b1cfb68a69de31c7b"
	I1119 22:23:05.648314  289819 cri.go:89] found id: "a81dc22bffadb4e4700283d758642382781bafd9b400949aa319477bbf4716b4"
	I1119 22:23:05.648320  289819 cri.go:89] found id: "d27d2794ce97035a004943adb54793085a521eb6a4513ec91c5bc2b1338354be"
	I1119 22:23:05.648324  289819 cri.go:89] found id: "b85e2e9bcc7d829034e5bbb16c165ae048593eeb427376782688adf2c8f4e90c"
	I1119 22:23:05.648328  289819 cri.go:89] found id: "f9c3e3ecb27478acc04ca5aa5d95be81bb8e6c96ff78b5374efbedc73cdb29d6"
	I1119 22:23:05.648333  289819 cri.go:89] found id: "29e1f5f1cb1b99b3a66e14a972ea7feb7293705bd1f6592ee1778524e3d3123d"
	I1119 22:23:05.648337  289819 cri.go:89] found id: "40a972971a9ab382c5ce85f8e882c14361f78c8897c6eb4b30e506a82326e560"
	I1119 22:23:05.648351  289819 cri.go:89] found id: "5929321495932c0c7ce8e595969edf334477c61b5b8615fe0fb88171d9eab230"
	I1119 22:23:05.648355  289819 cri.go:89] found id: "b0ed6ae5b675bb949adc944d3cc7d20404c52d58c9b7941be15cb28ed46d31bd"
	I1119 22:23:05.648365  289819 cri.go:89] found id: "b7ca1178966cc3f9bd224b8a6e7f789c2cbc30d136613381476b117423f76175"
	I1119 22:23:05.648374  289819 cri.go:89] found id: ""
	I1119 22:23:05.648417  289819 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1119 22:23:05.676356  289819 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"0ae41cc1c3304356a12f7e04027230cd00200db95bcd1364ccc13532416b3951","pid":831,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ae41cc1c3304356a12f7e04027230cd00200db95bcd1364ccc13532416b3951","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ae41cc1c3304356a12f7e04027230cd00200db95bcd1364ccc13532416b3951/rootfs","created":"2025-11-19T22:23:05.33900885Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"0ae41cc1c3304356a12f7e04027230cd00200db95bcd1364ccc13532416b3951","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-982287_6e6c969e5deca3d08506d67c9b1d82a2","io.kubernetes.cri.sandbox-memory":"0","io.
kubernetes.cri.sandbox-name":"etcd-newest-cni-982287","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"6e6c969e5deca3d08506d67c9b1d82a2"},"owner":"root"},{"ociVersion":"1.2.1","id":"1d58f88af5f9c3ac6165c6fcf22026ff419033f3c3753ae01f6e537524b94051","pid":857,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d58f88af5f9c3ac6165c6fcf22026ff419033f3c3753ae01f6e537524b94051","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d58f88af5f9c3ac6165c6fcf22026ff419033f3c3753ae01f6e537524b94051/rootfs","created":"2025-11-19T22:23:05.352104894Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"1d58f88af5f9c3ac6165c6fcf22026ff419033f3c3753ae01f6e537524b94051","io.kubernetes.cri.sandbox-log-direct
ory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-982287_2df7730119fc52593f99444d647fbfb1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-982287","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2df7730119fc52593f99444d647fbfb1"},"owner":"root"},{"ociVersion":"1.2.1","id":"a81dc22bffadb4e4700283d758642382781bafd9b400949aa319477bbf4716b4","pid":964,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a81dc22bffadb4e4700283d758642382781bafd9b400949aa319477bbf4716b4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a81dc22bffadb4e4700283d758642382781bafd9b400949aa319477bbf4716b4/rootfs","created":"2025-11-19T22:23:05.504621335Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"abd9d35da1e9bc4
6eddaaa9fa4458983d302ba75987e8150470cc30431c20a56","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-982287","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"71354f3089fb5cfec338c075a5031d58"},"owner":"root"},{"ociVersion":"1.2.1","id":"abd9d35da1e9bc46eddaaa9fa4458983d302ba75987e8150470cc30431c20a56","pid":864,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/abd9d35da1e9bc46eddaaa9fa4458983d302ba75987e8150470cc30431c20a56","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/abd9d35da1e9bc46eddaaa9fa4458983d302ba75987e8150470cc30431c20a56/rootfs","created":"2025-11-19T22:23:05.356690028Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"abd9d35da1e9bc46eddaaa9fa4458
983d302ba75987e8150470cc30431c20a56","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-982287_71354f3089fb5cfec338c075a5031d58","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-982287","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"71354f3089fb5cfec338c075a5031d58"},"owner":"root"},{"ociVersion":"1.2.1","id":"b7d166c33134d44e66ca4a753e174e29a312fb2e2f29c86b1cfb68a69de31c7b","pid":971,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7d166c33134d44e66ca4a753e174e29a312fb2e2f29c86b1cfb68a69de31c7b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7d166c33134d44e66ca4a753e174e29a312fb2e2f29c86b1cfb68a69de31c7b/rootfs","created":"2025-11-19T22:23:05.506717423Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.
io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"1d58f88af5f9c3ac6165c6fcf22026ff419033f3c3753ae01f6e537524b94051","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-982287","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2df7730119fc52593f99444d647fbfb1"},"owner":"root"},{"ociVersion":"1.2.1","id":"b7d3e6db6d219fdcc3a94acf1d9a708d12414f6ce528902b27db6573c4dbe83d","pid":816,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7d3e6db6d219fdcc3a94acf1d9a708d12414f6ce528902b27db6573c4dbe83d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7d3e6db6d219fdcc3a94acf1d9a708d12414f6ce528902b27db6573c4dbe83d/rootfs","created":"2025-11-19T22:23:05.334820545Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"1
02","io.kubernetes.cri.sandbox-id":"b7d3e6db6d219fdcc3a94acf1d9a708d12414f6ce528902b27db6573c4dbe83d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-982287_ffae298dd0003cc89157416bc9023259","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-982287","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ffae298dd0003cc89157416bc9023259"},"owner":"root"},{"ociVersion":"1.2.1","id":"b85e2e9bcc7d829034e5bbb16c165ae048593eeb427376782688adf2c8f4e90c","pid":930,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b85e2e9bcc7d829034e5bbb16c165ae048593eeb427376782688adf2c8f4e90c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b85e2e9bcc7d829034e5bbb16c165ae048593eeb427376782688adf2c8f4e90c/rootfs","created":"2025-11-19T22:23:05.479173858Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kube
rnetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"0ae41cc1c3304356a12f7e04027230cd00200db95bcd1364ccc13532416b3951","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-982287","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"6e6c969e5deca3d08506d67c9b1d82a2"},"owner":"root"},{"ociVersion":"1.2.1","id":"d27d2794ce97035a004943adb54793085a521eb6a4513ec91c5bc2b1338354be","pid":941,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d27d2794ce97035a004943adb54793085a521eb6a4513ec91c5bc2b1338354be","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d27d2794ce97035a004943adb54793085a521eb6a4513ec91c5bc2b1338354be/rootfs","created":"2025-11-19T22:23:05.482918643Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"b7d3e6db6d219fdcc3a94acf1d9a7
08d12414f6ce528902b27db6573c4dbe83d","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-982287","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ffae298dd0003cc89157416bc9023259"},"owner":"root"}]
	I1119 22:23:05.676612  289819 cri.go:126] list returned 8 containers
	I1119 22:23:05.676636  289819 cri.go:129] container: {ID:0ae41cc1c3304356a12f7e04027230cd00200db95bcd1364ccc13532416b3951 Status:running}
	I1119 22:23:05.676655  289819 cri.go:131] skipping 0ae41cc1c3304356a12f7e04027230cd00200db95bcd1364ccc13532416b3951 - not in ps
	I1119 22:23:05.676666  289819 cri.go:129] container: {ID:1d58f88af5f9c3ac6165c6fcf22026ff419033f3c3753ae01f6e537524b94051 Status:running}
	I1119 22:23:05.676674  289819 cri.go:131] skipping 1d58f88af5f9c3ac6165c6fcf22026ff419033f3c3753ae01f6e537524b94051 - not in ps
	I1119 22:23:05.676679  289819 cri.go:129] container: {ID:a81dc22bffadb4e4700283d758642382781bafd9b400949aa319477bbf4716b4 Status:running}
	I1119 22:23:05.676687  289819 cri.go:135] skipping {a81dc22bffadb4e4700283d758642382781bafd9b400949aa319477bbf4716b4 running}: state = "running", want "paused"
	I1119 22:23:05.676698  289819 cri.go:129] container: {ID:abd9d35da1e9bc46eddaaa9fa4458983d302ba75987e8150470cc30431c20a56 Status:running}
	I1119 22:23:05.676705  289819 cri.go:131] skipping abd9d35da1e9bc46eddaaa9fa4458983d302ba75987e8150470cc30431c20a56 - not in ps
	I1119 22:23:05.676711  289819 cri.go:129] container: {ID:b7d166c33134d44e66ca4a753e174e29a312fb2e2f29c86b1cfb68a69de31c7b Status:running}
	I1119 22:23:05.676717  289819 cri.go:135] skipping {b7d166c33134d44e66ca4a753e174e29a312fb2e2f29c86b1cfb68a69de31c7b running}: state = "running", want "paused"
	I1119 22:23:05.676724  289819 cri.go:129] container: {ID:b7d3e6db6d219fdcc3a94acf1d9a708d12414f6ce528902b27db6573c4dbe83d Status:running}
	I1119 22:23:05.676730  289819 cri.go:131] skipping b7d3e6db6d219fdcc3a94acf1d9a708d12414f6ce528902b27db6573c4dbe83d - not in ps
	I1119 22:23:05.676735  289819 cri.go:129] container: {ID:b85e2e9bcc7d829034e5bbb16c165ae048593eeb427376782688adf2c8f4e90c Status:running}
	I1119 22:23:05.676742  289819 cri.go:135] skipping {b85e2e9bcc7d829034e5bbb16c165ae048593eeb427376782688adf2c8f4e90c running}: state = "running", want "paused"
	I1119 22:23:05.676747  289819 cri.go:129] container: {ID:d27d2794ce97035a004943adb54793085a521eb6a4513ec91c5bc2b1338354be Status:running}
	I1119 22:23:05.676755  289819 cri.go:135] skipping {d27d2794ce97035a004943adb54793085a521eb6a4513ec91c5bc2b1338354be running}: state = "running", want "paused"
	I1119 22:23:05.676804  289819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:23:05.687151  289819 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 22:23:05.687174  289819 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 22:23:05.687304  289819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 22:23:05.698213  289819 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:23:05.699325  289819 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-982287" does not appear in /home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:23:05.700295  289819 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-9296/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-982287" cluster setting kubeconfig missing "newest-cni-982287" context setting]
	I1119 22:23:05.701354  289819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/kubeconfig: {Name:mk5b9093863cb8ca8629eea9fd861356875781d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:23:05.703694  289819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 22:23:05.716240  289819 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1119 22:23:05.716289  289819 kubeadm.go:602] duration metric: took 29.108218ms to restartPrimaryControlPlane
	I1119 22:23:05.716301  289819 kubeadm.go:403] duration metric: took 107.170746ms to StartCluster
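The "does not require reconfiguration" decision just above comes down to comparing the kubeadm config already on the node with the freshly rendered one. A rough shell equivalent of that check, based only on the diff command visible in the log (not minikube's actual implementation):

	# identical files => diff exits 0 => the running cluster needs no kubeadm re-run
	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null 2>&1; then
	    echo "The running cluster does not require reconfiguration"
	fi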
	I1119 22:23:05.716321  289819 settings.go:142] acquiring lock: {Name:mk3c795849984e82ee99295088dd85252bd75f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:23:05.716382  289819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:23:05.717905  289819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/kubeconfig: {Name:mk5b9093863cb8ca8629eea9fd861356875781d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:23:05.718189  289819 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:23:05.718408  289819 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:23:05.718507  289819 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-982287"
	I1119 22:23:05.718533  289819 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-982287"
	W1119 22:23:05.718540  289819 addons.go:248] addon storage-provisioner should already be in state true
	I1119 22:23:05.718572  289819 host.go:66] Checking if "newest-cni-982287" exists ...
	I1119 22:23:05.718645  289819 addons.go:70] Setting default-storageclass=true in profile "newest-cni-982287"
	I1119 22:23:05.718684  289819 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-982287"
	I1119 22:23:05.718933  289819 config.go:182] Loaded profile config "newest-cni-982287": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:23:05.718988  289819 addons.go:70] Setting metrics-server=true in profile "newest-cni-982287"
	I1119 22:23:05.719000  289819 addons.go:239] Setting addon metrics-server=true in "newest-cni-982287"
	W1119 22:23:05.719008  289819 addons.go:248] addon metrics-server should already be in state true
	I1119 22:23:05.719031  289819 host.go:66] Checking if "newest-cni-982287" exists ...
	I1119 22:23:05.719109  289819 cli_runner.go:164] Run: docker container inspect newest-cni-982287 --format={{.State.Status}}
	I1119 22:23:05.719314  289819 addons.go:70] Setting dashboard=true in profile "newest-cni-982287"
	I1119 22:23:05.719330  289819 addons.go:239] Setting addon dashboard=true in "newest-cni-982287"
	W1119 22:23:05.719337  289819 addons.go:248] addon dashboard should already be in state true
	I1119 22:23:05.719361  289819 host.go:66] Checking if "newest-cni-982287" exists ...
	I1119 22:23:05.719498  289819 cli_runner.go:164] Run: docker container inspect newest-cni-982287 --format={{.State.Status}}
	I1119 22:23:05.719841  289819 cli_runner.go:164] Run: docker container inspect newest-cni-982287 --format={{.State.Status}}
	I1119 22:23:05.720090  289819 cli_runner.go:164] Run: docker container inspect newest-cni-982287 --format={{.State.Status}}
	I1119 22:23:05.729495  289819 out.go:179] * Verifying Kubernetes components...
	I1119 22:23:05.731962  289819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:23:05.758586  289819 addons.go:239] Setting addon default-storageclass=true in "newest-cni-982287"
	W1119 22:23:05.758895  289819 addons.go:248] addon default-storageclass should already be in state true
	I1119 22:23:05.758927  289819 host.go:66] Checking if "newest-cni-982287" exists ...
	I1119 22:23:05.759564  289819 cli_runner.go:164] Run: docker container inspect newest-cni-982287 --format={{.State.Status}}
	I1119 22:23:05.762523  289819 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 22:23:05.763367  289819 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:23:05.765322  289819 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 22:23:05.765529  289819 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:23:05.765549  289819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:23:05.765645  289819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:23:05.766612  289819 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 22:23:05.766687  289819 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 22:23:05.766783  289819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:23:05.773182  289819 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1119 22:23:04.844028  286310 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (4.026589876s)
	I1119 22:23:04.844082  286310 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1119 22:23:04.844137  286310 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1119 22:23:04.844198  286310 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1119 22:23:05.144370  286310 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1119 22:23:05.144409  286310 cache_images.go:125] Successfully loaded all cached images
	I1119 22:23:05.144415  286310 cache_images.go:94] duration metric: took 11.177160398s to LoadCachedImages
	I1119 22:23:05.144429  286310 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1119 22:23:05.144559  286310 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-133839 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-133839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:23:05.144619  286310 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:23:05.174107  286310 cni.go:84] Creating CNI manager for ""
	I1119 22:23:05.174131  286310 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:23:05.174146  286310 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:23:05.174165  286310 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-133839 NodeName:kubernetes-upgrade-133839 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:23:05.174920  286310 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-133839"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:23:05.175032  286310 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:23:05.185665  286310 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:23:05.185741  286310 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:23:05.194710  286310 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1119 22:23:05.209129  286310 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:23:05.222603  286310 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1119 22:23:05.235675  286310 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:23:05.240236  286310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:23:05.407269  286310 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:23:05.423513  286310 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839 for IP: 192.168.76.2
	I1119 22:23:05.423538  286310 certs.go:195] generating shared ca certs ...
	I1119 22:23:05.423557  286310 certs.go:227] acquiring lock for ca certs: {Name:mkfe62d1b64cfdbe1c6a3d1f38aa0edc5b9ec419 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:23:05.423725  286310 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key
	I1119 22:23:05.423779  286310 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key
	I1119 22:23:05.423789  286310 certs.go:257] generating profile certs ...
	I1119 22:23:05.423938  286310 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/client.key
	I1119 22:23:05.424016  286310 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/apiserver.key.7b5ba011
	I1119 22:23:05.424060  286310 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/proxy-client.key
	I1119 22:23:05.424213  286310 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem (1338 bytes)
	W1119 22:23:05.424264  286310 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821_empty.pem, impossibly tiny 0 bytes
	I1119 22:23:05.424274  286310 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:23:05.424305  286310 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:23:05.424333  286310 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:23:05.424358  286310 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem (1679 bytes)
	I1119 22:23:05.424410  286310 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem (1708 bytes)
	I1119 22:23:05.425266  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:23:05.454516  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:23:05.480826  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:23:05.504307  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:23:05.532268  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1119 22:23:05.567173  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:23:05.594103  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:23:05.620056  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 22:23:05.643804  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem --> /usr/share/ca-certificates/12821.pem (1338 bytes)
	I1119 22:23:05.667446  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem --> /usr/share/ca-certificates/128212.pem (1708 bytes)
	I1119 22:23:05.696268  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:23:05.725121  286310 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:23:05.766871  286310 ssh_runner.go:195] Run: openssl version
	I1119 22:23:05.812278  286310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:23:05.843457  286310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:23:05.859132  286310 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:48 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:23:05.859209  286310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:23:05.950305  286310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:23:05.967930  286310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12821.pem && ln -fs /usr/share/ca-certificates/12821.pem /etc/ssl/certs/12821.pem"
	I1119 22:23:05.982903  286310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12821.pem
	I1119 22:23:05.989718  286310 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12821.pem
	I1119 22:23:05.989773  286310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12821.pem
	I1119 22:23:06.083354  286310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12821.pem /etc/ssl/certs/51391683.0"
	I1119 22:23:06.107662  286310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128212.pem && ln -fs /usr/share/ca-certificates/128212.pem /etc/ssl/certs/128212.pem"
	I1119 22:23:06.137374  286310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128212.pem
	I1119 22:23:06.146914  286310 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128212.pem
	I1119 22:23:06.146980  286310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128212.pem
	I1119 22:23:06.217031  286310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128212.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:23:06.237962  286310 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:23:06.244988  286310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:23:06.307687  286310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:23:06.369003  286310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:23:06.415194  286310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:23:06.462108  286310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:23:06.520233  286310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1119 22:23:06.557314  286310 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-133839 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-133839 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:23:06.557433  286310 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:23:06.557491  286310 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:23:06.590993  286310 cri.go:89] found id: "e6316f8dba2f64d811a9d3ec5d93950941d5e9cd21f83f287f740b63be231449"
	I1119 22:23:06.591025  286310 cri.go:89] found id: "b1850c507b8d7b55d64e9560edb98ea69ba385876fef8cfd3990158117278246"
	I1119 22:23:06.591031  286310 cri.go:89] found id: "4cd8c2005e8b0877eb2dd9714839c24aacf507d93fd07247c351abe430820f77"
	I1119 22:23:06.591036  286310 cri.go:89] found id: "3c390bebf1711248eb6ba3c2e76260c397d77fba97d83b003d5a65442330ea0d"
	I1119 22:23:06.591041  286310 cri.go:89] found id: "0b0fe3ba5621fe86ba74fa964ae824ec11bef11ced5d8ea12e42366b74988ae3"
	I1119 22:23:06.591046  286310 cri.go:89] found id: "244c93789be8217b040372f4bfd570deb19baa15d6e0d90a065d489983f16a8e"
	I1119 22:23:06.591053  286310 cri.go:89] found id: "a5b71ee7a7994d78ff08be65acf5df1b8e4cfa1aa223e09b338aaebf06ada4f5"
	I1119 22:23:06.591057  286310 cri.go:89] found id: "98d570aaaab466be92c2b69a5dfd5847f0e4ace601469147491780d7fc1a0111"
	I1119 22:23:06.591060  286310 cri.go:89] found id: ""
	I1119 22:23:06.591109  286310 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1119 22:23:06.621722  286310 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"12d9ec11a4972f320e48cf8fca8450b4c2a5ff30856118beab31d8c327b60b3c","pid":10342,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12d9ec11a4972f320e48cf8fca8450b4c2a5ff30856118beab31d8c327b60b3c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12d9ec11a4972f320e48cf8fca8450b4c2a5ff30856118beab31d8c327b60b3c/rootfs","created":"2025-11-19T22:22:40.547082186Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"12d9ec11a4972f320e48cf8fca8450b4c2a5ff30856118beab31d8c327b60b3c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-133839_a73bb1d93b00961144fed68962189df9","io.kubernete
s.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a73bb1d93b00961144fed68962189df9"},"owner":"root"},{"ociVersion":"1.2.1","id":"1ed7f1f8ffb70b1f7e64c7e1ea227b5db5955a6c016d0854795f217cb90a0535","pid":11153,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ed7f1f8ffb70b1f7e64c7e1ea227b5db5955a6c016d0854795f217cb90a0535","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ed7f1f8ffb70b1f7e64c7e1ea227b5db5955a6c016d0854795f217cb90a0535/rootfs","created":"2025-11-19T22:22:50.782298407Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"1ed7f1f8ffb70b1f7e64c7e1ea227b5db5955a6c016d
0854795f217cb90a0535","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-133839_7dffba8d778115ced44b1ff92d7a1c7d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7dffba8d778115ced44b1ff92d7a1c7d"},"owner":"root"},{"ociVersion":"1.2.1","id":"44a069aaaa220922fff9974f11af1aeabeb7c8e060df6704a442191e39e0b20a","pid":10319,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/44a069aaaa220922fff9974f11af1aeabeb7c8e060df6704a442191e39e0b20a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/44a069aaaa220922fff9974f11af1aeabeb7c8e060df6704a442191e39e0b20a/rootfs","created":"2025-11-19T22:22:40.542330901Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernete
s.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"44a069aaaa220922fff9974f11af1aeabeb7c8e060df6704a442191e39e0b20a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-133839_0ba3d982eebe7faf07b0096bf84838b5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0ba3d982eebe7faf07b0096bf84838b5"},"owner":"root"},{"ociVersion":"1.2.1","id":"48b8bb5abe3a6c0776b6c080a8596df219117928b5b203bf5156ace4ce6b61bf","pid":10452,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/48b8bb5abe3a6c0776b6c080a8596df219117928b5b203bf5156ace4ce6b61bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/48b8bb5abe3a6c0776b6c080a8596df219117928b5b203bf5156ace4ce6b61bf/rootfs","created":"2025-11-19T22:22:40.66540927Z","annotations":{"io.kubern
etes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"44a069aaaa220922fff9974f11af1aeabeb7c8e060df6704a442191e39e0b20a","io.kubernetes.cri.sandbox-name":"kube-apiserver-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0ba3d982eebe7faf07b0096bf84838b5"},"owner":"root"},{"ociVersion":"1.2.1","id":"4fc1258a098ed2efa6fb8a75abb3c0bb075dc11c5ffcf826641549f5b29b4ba8","pid":11234,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4fc1258a098ed2efa6fb8a75abb3c0bb075dc11c5ffcf826641549f5b29b4ba8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4fc1258a098ed2efa6fb8a75abb3c0bb075dc11c5ffcf826641549f5b29b4ba8/rootfs","created":"2025-11-19T22:22:51.473965449Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/p
ause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"4fc1258a098ed2efa6fb8a75abb3c0bb075dc11c5ffcf826641549f5b29b4ba8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-mmpvz_328cb4c9-8a50-4e7a-bc3f-84fa90bfb493","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-mmpvz","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"328cb4c9-8a50-4e7a-bc3f-84fa90bfb493"},"owner":"root"},{"ociVersion":"1.2.1","id":"7ffa0d1a35c6c1b385439317d3fc27c223ac493687d1f922947c60285eefbcbc","pid":12252,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ffa0d1a35c6c1b385439317d3fc27c223ac493687d1f922947c60285eefbcbc","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ffa0d1a35c6c1b385439317d3fc27c223ac493687d1f922947c60285eefbcbc/rootfs","created":"2025-11-19T22:23:
05.747249242Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"7ffa0d1a35c6c1b385439317d3fc27c223ac493687d1f922947c60285eefbcbc","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-133839_2daa2a824df8c95f73f0a9a59dbe7a36","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2daa2a824df8c95f73f0a9a59dbe7a36"},"owner":"root"},{"ociVersion":"1.2.1","id":"87b8dd6ca9b813cf37b2ded22cbc31a79567feca4c36e13d251d9b218de85710","pid":10506,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/87b8dd6ca9b813cf37b2ded22cbc31a79567feca4c36e13d251d9b218de8
5710","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/87b8dd6ca9b813cf37b2ded22cbc31a79567feca4c36e13d251d9b218de85710/rootfs","created":"2025-11-19T22:22:40.692429634Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"bd23568adf1f13eb3c8fad2d246ae2e0c7c065a6bc4eacf802300dbbaef34dce","io.kubernetes.cri.sandbox-name":"kube-scheduler-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2daa2a824df8c95f73f0a9a59dbe7a36"},"owner":"root"},{"ociVersion":"1.2.1","id":"8e061dd59ede876b86e1a7d997e4443482d81b306fb4e751f2e4e0652b012ec6","pid":11966,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e061dd59ede876b86e1a7d997e4443482d81b306fb4e751f2e4e0652b012ec6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e061dd59ede876b86e1a7d997
e4443482d81b306fb4e751f2e4e0652b012ec6/rootfs","created":"2025-11-19T22:23:04.904760302Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"8e061dd59ede876b86e1a7d997e4443482d81b306fb4e751f2e4e0652b012ec6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-133839_0ba3d982eebe7faf07b0096bf84838b5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0ba3d982eebe7faf07b0096bf84838b5"},"owner":"root"},{"ociVersion":"1.2.1","id":"98d570aaaab466be92c2b69a5dfd5847f0e4ace601469147491780d7fc1a0111","pid":11310,"status":"running","bundle":"/run/containerd/io.containerd.runtime.
v2.task/k8s.io/98d570aaaab466be92c2b69a5dfd5847f0e4ace601469147491780d7fc1a0111","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/98d570aaaab466be92c2b69a5dfd5847f0e4ace601469147491780d7fc1a0111/rootfs","created":"2025-11-19T22:22:53.997448247Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20250512-df8de77b","io.kubernetes.cri.sandbox-id":"4fc1258a098ed2efa6fb8a75abb3c0bb075dc11c5ffcf826641549f5b29b4ba8","io.kubernetes.cri.sandbox-name":"kindnet-mmpvz","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"328cb4c9-8a50-4e7a-bc3f-84fa90bfb493"},"owner":"root"},{"ociVersion":"1.2.1","id":"a6123471734e200fddf31b912d99e6bc7e2f2c74ed407e9241243453c028db6d","pid":12261,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a6123471734e200fddf31b912d99e6bc7e2f2c74ed407e9241243453c028db6d","rootfs":"/run/containerd/io.cont
ainerd.runtime.v2.task/k8s.io/a6123471734e200fddf31b912d99e6bc7e2f2c74ed407e9241243453c028db6d/rootfs","created":"2025-11-19T22:23:05.77526698Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"a6123471734e200fddf31b912d99e6bc7e2f2c74ed407e9241243453c028db6d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-tv8vg_4fa9d59c-bb51-48df-90a7-5d8964136650","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-tv8vg","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4fa9d59c-bb51-48df-90a7-5d8964136650"},"owner":"root"},{"ociVersion":"1.2.1","id":"bd23568adf1f13eb3c8fad2d246ae2e0c7c065a6bc4eacf802300dbbaef34dce","pid":10385,"status":"running","bundle":"/run/containerd/io.contai
nerd.runtime.v2.task/k8s.io/bd23568adf1f13eb3c8fad2d246ae2e0c7c065a6bc4eacf802300dbbaef34dce","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd23568adf1f13eb3c8fad2d246ae2e0c7c065a6bc4eacf802300dbbaef34dce/rootfs","created":"2025-11-19T22:22:40.585328115Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"bd23568adf1f13eb3c8fad2d246ae2e0c7c065a6bc4eacf802300dbbaef34dce","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-133839_2daa2a824df8c95f73f0a9a59dbe7a36","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2daa2a824df8c95f73f0a9a59dbe7a36"},"owner
":"root"},{"ociVersion":"1.2.1","id":"cca94b63b3b80ac27cb4a496c0a84f300a77511b40c211f086d7f79144b2f848","pid":10393,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cca94b63b3b80ac27cb4a496c0a84f300a77511b40c211f086d7f79144b2f848","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cca94b63b3b80ac27cb4a496c0a84f300a77511b40c211f086d7f79144b2f848/rootfs","created":"2025-11-19T22:22:40.582268242Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"cca94b63b3b80ac27cb4a496c0a84f300a77511b40c211f086d7f79144b2f848","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-133839_7dffba8d778115ced44b1ff92d7a1c7d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-kub
ernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7dffba8d778115ced44b1ff92d7a1c7d"},"owner":"root"},{"ociVersion":"1.2.1","id":"dedead971ea2a958b7efb611b115ad2b440f8121ad8beb2d700bbd7d5e7411ee","pid":10469,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dedead971ea2a958b7efb611b115ad2b440f8121ad8beb2d700bbd7d5e7411ee","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dedead971ea2a958b7efb611b115ad2b440f8121ad8beb2d700bbd7d5e7411ee/rootfs","created":"2025-11-19T22:22:40.681427399Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"12d9ec11a4972f320e48cf8fca8450b4c2a5ff30856118beab31d8c327b60b3c","io.kubernetes.cri.sandbox-name":"kube-controller-manager-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-
system","io.kubernetes.cri.sandbox-uid":"a73bb1d93b00961144fed68962189df9"},"owner":"root"},{"ociVersion":"1.2.1","id":"e2559a1a3c6a93297b3cd70a1d33d499072cb3f344f5b04cf590c553b3a97867","pid":11161,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2559a1a3c6a93297b3cd70a1d33d499072cb3f344f5b04cf590c553b3a97867","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2559a1a3c6a93297b3cd70a1d33d499072cb3f344f5b04cf590c553b3a97867/rootfs","created":"2025-11-19T22:22:50.786656045Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"e2559a1a3c6a93297b3cd70a1d33d499072cb3f344f5b04cf590c553b3a97867","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-133839_a73bb1d93b0096
1144fed68962189df9","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a73bb1d93b00961144fed68962189df9"},"owner":"root"},{"ociVersion":"1.2.1","id":"e6316f8dba2f64d811a9d3ec5d93950941d5e9cd21f83f287f740b63be231449","pid":12356,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6316f8dba2f64d811a9d3ec5d93950941d5e9cd21f83f287f740b63be231449","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6316f8dba2f64d811a9d3ec5d93950941d5e9cd21f83f287f740b63be231449/rootfs","created":"2025-11-19T22:23:06.071063858Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.34.1","io.kubernetes.cri.sandbox-id":"a6123471734e200fddf31b912d99e6bc7e2f2c74ed407e9241243453c028db6d","io.kubernetes.cri.sandbox-name":"k
ube-proxy-tv8vg","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4fa9d59c-bb51-48df-90a7-5d8964136650"},"owner":"root"},{"ociVersion":"1.2.1","id":"f5bfa324a5846e1b2244976fcf3019acefb82ba86ec957065470e7ffb9b1a115","pid":10500,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5bfa324a5846e1b2244976fcf3019acefb82ba86ec957065470e7ffb9b1a115","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5bfa324a5846e1b2244976fcf3019acefb82ba86ec957065470e7ffb9b1a115/rootfs","created":"2025-11-19T22:22:40.694516568Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"cca94b63b3b80ac27cb4a496c0a84f300a77511b40c211f086d7f79144b2f848","io.kubernetes.cri.sandbox-name":"etcd-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7dffba8d778115ced44
b1ff92d7a1c7d"},"owner":"root"}]
	I1119 22:23:06.622070  286310 cri.go:126] list returned 16 containers
	I1119 22:23:06.622090  286310 cri.go:129] container: {ID:12d9ec11a4972f320e48cf8fca8450b4c2a5ff30856118beab31d8c327b60b3c Status:running}
	I1119 22:23:06.622123  286310 cri.go:131] skipping 12d9ec11a4972f320e48cf8fca8450b4c2a5ff30856118beab31d8c327b60b3c - not in ps
	I1119 22:23:06.622152  286310 cri.go:129] container: {ID:1ed7f1f8ffb70b1f7e64c7e1ea227b5db5955a6c016d0854795f217cb90a0535 Status:running}
	I1119 22:23:06.622160  286310 cri.go:131] skipping 1ed7f1f8ffb70b1f7e64c7e1ea227b5db5955a6c016d0854795f217cb90a0535 - not in ps
	I1119 22:23:06.622168  286310 cri.go:129] container: {ID:44a069aaaa220922fff9974f11af1aeabeb7c8e060df6704a442191e39e0b20a Status:running}
	I1119 22:23:06.622180  286310 cri.go:131] skipping 44a069aaaa220922fff9974f11af1aeabeb7c8e060df6704a442191e39e0b20a - not in ps
	I1119 22:23:06.622189  286310 cri.go:129] container: {ID:48b8bb5abe3a6c0776b6c080a8596df219117928b5b203bf5156ace4ce6b61bf Status:running}
	I1119 22:23:06.622194  286310 cri.go:131] skipping 48b8bb5abe3a6c0776b6c080a8596df219117928b5b203bf5156ace4ce6b61bf - not in ps
	I1119 22:23:06.622203  286310 cri.go:129] container: {ID:4fc1258a098ed2efa6fb8a75abb3c0bb075dc11c5ffcf826641549f5b29b4ba8 Status:running}
	I1119 22:23:06.622208  286310 cri.go:131] skipping 4fc1258a098ed2efa6fb8a75abb3c0bb075dc11c5ffcf826641549f5b29b4ba8 - not in ps
	I1119 22:23:06.622216  286310 cri.go:129] container: {ID:7ffa0d1a35c6c1b385439317d3fc27c223ac493687d1f922947c60285eefbcbc Status:running}
	I1119 22:23:06.622222  286310 cri.go:131] skipping 7ffa0d1a35c6c1b385439317d3fc27c223ac493687d1f922947c60285eefbcbc - not in ps
	I1119 22:23:06.622230  286310 cri.go:129] container: {ID:87b8dd6ca9b813cf37b2ded22cbc31a79567feca4c36e13d251d9b218de85710 Status:running}
	I1119 22:23:06.622235  286310 cri.go:131] skipping 87b8dd6ca9b813cf37b2ded22cbc31a79567feca4c36e13d251d9b218de85710 - not in ps
	I1119 22:23:06.622242  286310 cri.go:129] container: {ID:8e061dd59ede876b86e1a7d997e4443482d81b306fb4e751f2e4e0652b012ec6 Status:running}
	I1119 22:23:06.622247  286310 cri.go:131] skipping 8e061dd59ede876b86e1a7d997e4443482d81b306fb4e751f2e4e0652b012ec6 - not in ps
	I1119 22:23:06.622251  286310 cri.go:129] container: {ID:98d570aaaab466be92c2b69a5dfd5847f0e4ace601469147491780d7fc1a0111 Status:running}
	I1119 22:23:06.622261  286310 cri.go:135] skipping {98d570aaaab466be92c2b69a5dfd5847f0e4ace601469147491780d7fc1a0111 running}: state = "running", want "paused"
	I1119 22:23:06.622270  286310 cri.go:129] container: {ID:a6123471734e200fddf31b912d99e6bc7e2f2c74ed407e9241243453c028db6d Status:running}
	I1119 22:23:06.622281  286310 cri.go:131] skipping a6123471734e200fddf31b912d99e6bc7e2f2c74ed407e9241243453c028db6d - not in ps
	I1119 22:23:06.622285  286310 cri.go:129] container: {ID:bd23568adf1f13eb3c8fad2d246ae2e0c7c065a6bc4eacf802300dbbaef34dce Status:running}
	I1119 22:23:06.622291  286310 cri.go:131] skipping bd23568adf1f13eb3c8fad2d246ae2e0c7c065a6bc4eacf802300dbbaef34dce - not in ps
	I1119 22:23:06.622298  286310 cri.go:129] container: {ID:cca94b63b3b80ac27cb4a496c0a84f300a77511b40c211f086d7f79144b2f848 Status:running}
	I1119 22:23:06.622306  286310 cri.go:131] skipping cca94b63b3b80ac27cb4a496c0a84f300a77511b40c211f086d7f79144b2f848 - not in ps
	I1119 22:23:06.622315  286310 cri.go:129] container: {ID:dedead971ea2a958b7efb611b115ad2b440f8121ad8beb2d700bbd7d5e7411ee Status:running}
	I1119 22:23:06.622319  286310 cri.go:131] skipping dedead971ea2a958b7efb611b115ad2b440f8121ad8beb2d700bbd7d5e7411ee - not in ps
	I1119 22:23:06.622327  286310 cri.go:129] container: {ID:e2559a1a3c6a93297b3cd70a1d33d499072cb3f344f5b04cf590c553b3a97867 Status:running}
	I1119 22:23:06.622336  286310 cri.go:131] skipping e2559a1a3c6a93297b3cd70a1d33d499072cb3f344f5b04cf590c553b3a97867 - not in ps
	I1119 22:23:06.622344  286310 cri.go:129] container: {ID:e6316f8dba2f64d811a9d3ec5d93950941d5e9cd21f83f287f740b63be231449 Status:running}
	I1119 22:23:06.622355  286310 cri.go:135] skipping {e6316f8dba2f64d811a9d3ec5d93950941d5e9cd21f83f287f740b63be231449 running}: state = "running", want "paused"
	I1119 22:23:06.622364  286310 cri.go:129] container: {ID:f5bfa324a5846e1b2244976fcf3019acefb82ba86ec957065470e7ffb9b1a115 Status:running}
	I1119 22:23:06.622372  286310 cri.go:131] skipping f5bfa324a5846e1b2244976fcf3019acefb82ba86ec957065470e7ffb9b1a115 - not in ps
	I1119 22:23:06.622418  286310 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:23:06.632054  286310 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 22:23:06.632073  286310 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 22:23:06.632140  286310 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 22:23:06.641759  286310 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:23:06.642570  286310 kubeconfig.go:125] found "kubernetes-upgrade-133839" server: "https://192.168.76.2:8443"
	I1119 22:23:06.643563  286310 kapi.go:59] client config for kubernetes-upgrade-133839: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/client.key", CAFile:"/home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 22:23:06.644022  286310 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1119 22:23:06.644043  286310 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1119 22:23:06.644050  286310 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1119 22:23:06.644060  286310 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1119 22:23:06.644065  286310 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1119 22:23:06.644383  286310 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 22:23:06.653823  286310 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1119 22:23:06.653865  286310 kubeadm.go:602] duration metric: took 21.785301ms to restartPrimaryControlPlane
	I1119 22:23:06.653875  286310 kubeadm.go:403] duration metric: took 96.575435ms to StartCluster
	I1119 22:23:06.653916  286310 settings.go:142] acquiring lock: {Name:mk3c795849984e82ee99295088dd85252bd75f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:23:06.653994  286310 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:23:06.655154  286310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/kubeconfig: {Name:mk5b9093863cb8ca8629eea9fd861356875781d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:23:06.655432  286310 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:23:06.655568  286310 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:23:06.655646  286310 config.go:182] Loaded profile config "kubernetes-upgrade-133839": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:23:06.655681  286310 addons.go:70] Setting storage-provisioner=true in profile "kubernetes-upgrade-133839"
	I1119 22:23:06.655700  286310 addons.go:70] Setting default-storageclass=true in profile "kubernetes-upgrade-133839"
	I1119 22:23:06.655719  286310 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-133839"
	I1119 22:23:06.655702  286310 addons.go:239] Setting addon storage-provisioner=true in "kubernetes-upgrade-133839"
	W1119 22:23:06.655807  286310 addons.go:248] addon storage-provisioner should already be in state true
	I1119 22:23:06.655836  286310 host.go:66] Checking if "kubernetes-upgrade-133839" exists ...
	I1119 22:23:06.656106  286310 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-133839 --format={{.State.Status}}
	I1119 22:23:06.656355  286310 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-133839 --format={{.State.Status}}
	I1119 22:23:06.658386  286310 out.go:179] * Verifying Kubernetes components...
	I1119 22:23:06.659906  286310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:23:06.683464  286310 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	4d83a706b7766       56cc512116c8f       7 seconds ago       Running             busybox                   0                   6f63218b464b9       busybox                                                default
	810caa6ef2edc       52546a367cc9e       13 seconds ago      Running             coredns                   0                   aca741cb294cb       coredns-66bc5c9577-f5cqw                               kube-system
	a1d8eb49da113       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   ce6b6c5254d45       storage-provisioner                                    kube-system
	72a01e4ba7db0       409467f978b4a       25 seconds ago      Running             kindnet-cni               0                   1fbce5df09294       kindnet-ml6h4                                          kube-system
	7cb9dc7e8e5c6       fc25172553d79       26 seconds ago      Running             kube-proxy                0                   0c00ab6626f2b       kube-proxy-r2sgg                                       kube-system
	eb1659c62a6af       7dd6aaa1717ab       36 seconds ago      Running             kube-scheduler            0                   d7204ee6217df       kube-scheduler-default-k8s-diff-port-409240            kube-system
	2f7c6aef7e56e       c3994bc696102       36 seconds ago      Running             kube-apiserver            0                   d8e6cb3eb629e       kube-apiserver-default-k8s-diff-port-409240            kube-system
	5167af3d80ffd       c80c8dbafe7dd       36 seconds ago      Running             kube-controller-manager   0                   d79dead10e2ce       kube-controller-manager-default-k8s-diff-port-409240   kube-system
	d38b3d9548d61       5f1f5298c888d       36 seconds ago      Running             etcd                      0                   ef19fe6ed2ca3       etcd-default-k8s-diff-port-409240                      kube-system
	
	
	==> containerd <==
	Nov 19 22:22:54 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:54.322787744Z" level=info msg="StartContainer for \"a1d8eb49da11368af8baafedb6697768131e7a87fc151cc41099221841ba7546\""
	Nov 19 22:22:54 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:54.325307164Z" level=info msg="connecting to shim a1d8eb49da11368af8baafedb6697768131e7a87fc151cc41099221841ba7546" address="unix:///run/containerd/s/6dcae8d39a0b5dcfd30cd1013c5df08dd04f4758fcbb35fe45c2446c7b042307" protocol=ttrpc version=3
	Nov 19 22:22:54 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:54.327050450Z" level=info msg="CreateContainer within sandbox \"aca741cb294cbae9a1df7d3b32e570c8c906aecf9bd3edf4f7ba815f02c6ffec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 19 22:22:54 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:54.335556765Z" level=info msg="Container 810caa6ef2edc83a8b8a5856884b2ad886cb3c6cb49581d5756efe178ccdff85: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:22:54 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:54.343834979Z" level=info msg="CreateContainer within sandbox \"aca741cb294cbae9a1df7d3b32e570c8c906aecf9bd3edf4f7ba815f02c6ffec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"810caa6ef2edc83a8b8a5856884b2ad886cb3c6cb49581d5756efe178ccdff85\""
	Nov 19 22:22:54 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:54.344380569Z" level=info msg="StartContainer for \"810caa6ef2edc83a8b8a5856884b2ad886cb3c6cb49581d5756efe178ccdff85\""
	Nov 19 22:22:54 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:54.345396401Z" level=info msg="connecting to shim 810caa6ef2edc83a8b8a5856884b2ad886cb3c6cb49581d5756efe178ccdff85" address="unix:///run/containerd/s/2fc39d63b116a1ff20c366ca1d4d88883d14ac0100f4f47879a1bdae9ebd425a" protocol=ttrpc version=3
	Nov 19 22:22:54 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:54.395284737Z" level=info msg="StartContainer for \"a1d8eb49da11368af8baafedb6697768131e7a87fc151cc41099221841ba7546\" returns successfully"
	Nov 19 22:22:54 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:54.409784018Z" level=info msg="StartContainer for \"810caa6ef2edc83a8b8a5856884b2ad886cb3c6cb49581d5756efe178ccdff85\" returns successfully"
	Nov 19 22:22:58 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:58.102585147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:eee884c0-8976-48f4-8b93-86a4bc150754,Namespace:default,Attempt:0,}"
	Nov 19 22:22:58 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:58.155603467Z" level=info msg="connecting to shim 6f63218b464b9df3440957b974563f4966a8f62c4cfeb3f03c0403fd71fb70a2" address="unix:///run/containerd/s/af761d2e20de794dbb47f057a60c2e52887f37a0b3b075c22124a2598aabd4a5" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 22:22:58 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:58.240723137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:eee884c0-8976-48f4-8b93-86a4bc150754,Namespace:default,Attempt:0,} returns sandbox id \"6f63218b464b9df3440957b974563f4966a8f62c4cfeb3f03c0403fd71fb70a2\""
	Nov 19 22:22:58 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:58.243400864Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.390779692Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.391636689Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396648"
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.392809709Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.396974406Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.397508336Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.15405943s"
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.397552872Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.402481573Z" level=info msg="CreateContainer within sandbox \"6f63218b464b9df3440957b974563f4966a8f62c4cfeb3f03c0403fd71fb70a2\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.410068998Z" level=info msg="Container 4d83a706b77663920f27b17bc399c18f9b4a80e7f0036883ff9002c4755b617e: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.417619619Z" level=info msg="CreateContainer within sandbox \"6f63218b464b9df3440957b974563f4966a8f62c4cfeb3f03c0403fd71fb70a2\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"4d83a706b77663920f27b17bc399c18f9b4a80e7f0036883ff9002c4755b617e\""
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.418313393Z" level=info msg="StartContainer for \"4d83a706b77663920f27b17bc399c18f9b4a80e7f0036883ff9002c4755b617e\""
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.419258335Z" level=info msg="connecting to shim 4d83a706b77663920f27b17bc399c18f9b4a80e7f0036883ff9002c4755b617e" address="unix:///run/containerd/s/af761d2e20de794dbb47f057a60c2e52887f37a0b3b075c22124a2598aabd4a5" protocol=ttrpc version=3
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.481717143Z" level=info msg="StartContainer for \"4d83a706b77663920f27b17bc399c18f9b4a80e7f0036883ff9002c4755b617e\" returns successfully"
	
	
	==> coredns [810caa6ef2edc83a8b8a5856884b2ad886cb3c6cb49581d5756efe178ccdff85] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46246 - 40558 "HINFO IN 3435086917380568170.3234967037515506881. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.066971238s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-409240
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-409240
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=default-k8s-diff-port-409240
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_22_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:22:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-409240
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:23:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:23:06 +0000   Wed, 19 Nov 2025 22:22:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:23:06 +0000   Wed, 19 Nov 2025 22:22:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:23:06 +0000   Wed, 19 Nov 2025 22:22:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:23:06 +0000   Wed, 19 Nov 2025 22:22:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-409240
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                25b2f9d9-4024-4506-99ca-57d79a4aba10
	  Boot ID:                    f21fb8e8-9754-4dc5-a8d9-ce41ba5f6057
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-f5cqw                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-default-k8s-diff-port-409240                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-ml6h4                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-default-k8s-diff-port-409240             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-409240    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-r2sgg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-default-k8s-diff-port-409240             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node default-k8s-diff-port-409240 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node default-k8s-diff-port-409240 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x7 over 37s)  kubelet          Node default-k8s-diff-port-409240 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  37s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  32s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  32s                kubelet          Node default-k8s-diff-port-409240 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s                kubelet          Node default-k8s-diff-port-409240 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s                kubelet          Node default-k8s-diff-port-409240 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node default-k8s-diff-port-409240 event: Registered Node default-k8s-diff-port-409240 in Controller
	  Normal  NodeReady                15s                kubelet          Node default-k8s-diff-port-409240 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov19 21:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001836] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.089012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.424964] i8042: Warning: Keylock active
	[  +0.011946] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499038] block sda: the capability attribute has been deprecated.
	[  +0.090446] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026259] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.862736] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [d38b3d9548d6178a52dc3b1ff81520bb354f6add1ea3feaae5043525a24acf02] <==
	{"level":"warn","ts":"2025-11-19T22:22:33.009471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.017434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.034076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.043421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.052425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.059739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.068032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.076401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.095998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.105855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.114236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.184838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49796","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T22:22:36.356589Z","caller":"traceutil/trace.go:172","msg":"trace[1531196741] transaction","detail":"{read_only:false; response_revision:264; number_of_response:1; }","duration":"104.72599ms","start":"2025-11-19T22:22:36.251842Z","end":"2025-11-19T22:22:36.356568Z","steps":["trace[1531196741] 'process raft request'  (duration: 104.203074ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:22:43.414526Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.669121ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-ml6h4\" limit:1 ","response":"range_response_count:1 size:5340"}
	{"level":"info","ts":"2025-11-19T22:22:43.414632Z","caller":"traceutil/trace.go:172","msg":"trace[672105413] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-ml6h4; range_end:; response_count:1; response_revision:418; }","duration":"119.788443ms","start":"2025-11-19T22:22:43.294827Z","end":"2025-11-19T22:22:43.414615Z","steps":["trace[672105413] 'range keys from in-memory index tree'  (duration: 119.537649ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:22:53.466823Z","caller":"traceutil/trace.go:172","msg":"trace[551475231] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"228.146488ms","start":"2025-11-19T22:22:53.238653Z","end":"2025-11-19T22:22:53.466800Z","steps":["trace[551475231] 'process raft request'  (duration: 227.952842ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:22:53.655671Z","caller":"traceutil/trace.go:172","msg":"trace[2096458719] linearizableReadLoop","detail":"{readStateIndex:445; appliedIndex:445; }","duration":"170.449429ms","start":"2025-11-19T22:22:53.485201Z","end":"2025-11-19T22:22:53.655650Z","steps":["trace[2096458719] 'read index received'  (duration: 170.436538ms)","trace[2096458719] 'applied index is now lower than readState.Index'  (duration: 11.568µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:22:53.697970Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"212.750546ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-409240\" limit:1 ","response":"range_response_count:1 size:4648"}
	{"level":"info","ts":"2025-11-19T22:22:53.698040Z","caller":"traceutil/trace.go:172","msg":"trace[787510816] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-409240; range_end:; response_count:1; response_revision:432; }","duration":"212.834189ms","start":"2025-11-19T22:22:53.485191Z","end":"2025-11-19T22:22:53.698025Z","steps":["trace[787510816] 'agreement among raft nodes before linearized reading'  (duration: 170.561877ms)","trace[787510816] 'range keys from in-memory index tree'  (duration: 42.030043ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T22:22:53.698077Z","caller":"traceutil/trace.go:172","msg":"trace[1227998274] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"453.483537ms","start":"2025-11-19T22:22:53.244572Z","end":"2025-11-19T22:22:53.698056Z","steps":["trace[1227998274] 'process raft request'  (duration: 411.115494ms)","trace[1227998274] 'compare'  (duration: 42.246072ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:22:53.698666Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-19T22:22:53.244547Z","time spent":"453.652917ms","remote":"127.0.0.1:49052","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4564,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/default-k8s-diff-port-409240\" mod_revision:356 > success:<request_put:<key:\"/registry/minions/default-k8s-diff-port-409240\" value_size:4510 >> failure:<request_range:<key:\"/registry/minions/default-k8s-diff-port-409240\" > >"}
	{"level":"warn","ts":"2025-11-19T22:22:55.910940Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.881407ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T22:22:55.911042Z","caller":"traceutil/trace.go:172","msg":"trace[419586803] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:456; }","duration":"120.002318ms","start":"2025-11-19T22:22:55.791025Z","end":"2025-11-19T22:22:55.911027Z","steps":["trace[419586803] 'range keys from in-memory index tree'  (duration: 119.8203ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:22:55.911160Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"256.5529ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T22:22:55.911191Z","caller":"traceutil/trace.go:172","msg":"trace[1202920222] range","detail":"{range_begin:/registry/minions; range_end:; response_count:0; response_revision:456; }","duration":"256.590275ms","start":"2025-11-19T22:22:55.654592Z","end":"2025-11-19T22:22:55.911182Z","steps":["trace[1202920222] 'range keys from in-memory index tree'  (duration: 256.493047ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:23:08 up  1:05,  0 user,  load average: 4.85, 3.81, 2.45
	Linux default-k8s-diff-port-409240 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [72a01e4ba7db0cbbf56094887293f9bd55e892f2efc3c5f638add5dd05a0771d] <==
	I1119 22:22:42.770285       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:22:42.770685       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1119 22:22:42.770828       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:22:42.770845       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:22:42.770858       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:22:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:22:43.059265       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:22:43.059336       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:22:43.059353       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:22:43.059579       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:22:43.459466       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:22:43.459492       1 metrics.go:72] Registering metrics
	I1119 22:22:43.459542       1 controller.go:711] "Syncing nftables rules"
	I1119 22:22:53.062961       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 22:22:53.063027       1 main.go:301] handling current node
	I1119 22:23:03.059952       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 22:23:03.059992       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2f7c6aef7e56e549c365f54517c771a4d1d1d70e8fbebb03436e2207659e9842] <==
	I1119 22:22:33.732244       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 22:22:33.737153       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:22:33.738196       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:22:33.740273       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 22:22:33.746142       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:22:33.746388       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:22:33.782669       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:22:34.733984       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:22:34.777955       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:22:34.777981       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:22:35.401694       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:22:35.444799       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:22:35.539293       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:22:35.545289       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1119 22:22:35.546543       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:22:35.550522       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:22:36.385678       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:22:36.396160       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:22:36.413380       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:22:36.423969       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:22:41.652154       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 22:22:42.252173       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:22:42.308563       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:22:42.318305       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1119 22:23:06.964768       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:39034: use of closed network connection
	
	
	==> kube-controller-manager [5167af3d80ffdb05096166d9330cad0299100bafcb9e0af013f17c31936a27c7] <==
	I1119 22:22:41.396000       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:22:41.396024       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:22:41.396032       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:22:41.396524       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 22:22:41.396540       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 22:22:41.396572       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 22:22:41.396705       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 22:22:41.396961       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 22:22:41.397196       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 22:22:41.397553       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 22:22:41.397230       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 22:22:41.397694       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 22:22:41.397680       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:22:41.397213       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 22:22:41.399426       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 22:22:41.402055       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:22:41.402755       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:22:41.406720       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 22:22:41.406838       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:22:41.407233       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-409240"
	I1119 22:22:41.407289       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 22:22:41.422505       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:22:41.439693       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:22:41.454097       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:22:56.409707       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7cb9dc7e8e5c6d569542be180135a6b54fa081a5e0d488813e3772ad7d8749b8] <==
	I1119 22:22:42.295142       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:22:42.357439       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:22:42.457715       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:22:42.457752       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1119 22:22:42.457834       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:22:42.485008       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:22:42.485078       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:22:42.491736       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:22:42.492195       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:22:42.492225       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:22:42.494035       1 config.go:200] "Starting service config controller"
	I1119 22:22:42.497010       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:22:42.494625       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:22:42.497067       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:22:42.494637       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:22:42.497081       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:22:42.494278       1 config.go:309] "Starting node config controller"
	I1119 22:22:42.497092       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:22:42.497293       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:22:42.598645       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:22:42.598688       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:22:42.598728       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [eb1659c62a6af8497a707325868baae27ff17f0f302531a2e636ac52a83637e0] <==
	E1119 22:22:33.705705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:22:33.705737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:22:33.705821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:22:33.705866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:22:33.705871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:22:33.706031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:22:33.706626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:22:33.708738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:22:34.508115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:22:34.510102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:22:34.555481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:22:34.565837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:22:34.676508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:22:34.733477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:22:34.811547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:22:34.924193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:22:34.946568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:22:34.986400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:22:35.025070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:22:35.031350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:22:35.044773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:22:35.070125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:22:35.103437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 22:22:35.140810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1119 22:22:36.902024       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:22:37 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:37.274922    1446 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-default-k8s-diff-port-409240"
	Nov 19 22:22:37 default-k8s-diff-port-409240 kubelet[1446]: E1119 22:22:37.285660    1446 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-409240\" already exists" pod="kube-system/etcd-default-k8s-diff-port-409240"
	Nov 19 22:22:37 default-k8s-diff-port-409240 kubelet[1446]: E1119 22:22:37.286760    1446 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-409240\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-409240"
	Nov 19 22:22:37 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:37.300807    1446 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-409240" podStartSLOduration=1.300783788 podStartE2EDuration="1.300783788s" podCreationTimestamp="2025-11-19 22:22:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:37.300653374 +0000 UTC m=+1.151940170" watchObservedRunningTime="2025-11-19 22:22:37.300783788 +0000 UTC m=+1.152070579"
	Nov 19 22:22:37 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:37.300928    1446 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-409240" podStartSLOduration=1.300922094 podStartE2EDuration="1.300922094s" podCreationTimestamp="2025-11-19 22:22:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:37.286011705 +0000 UTC m=+1.137298498" watchObservedRunningTime="2025-11-19 22:22:37.300922094 +0000 UTC m=+1.152208885"
	Nov 19 22:22:41 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:41.405721    1446 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 22:22:41 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:41.406652    1446 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 22:22:41 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:41.778058    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtmxq\" (UniqueName: \"kubernetes.io/projected/2b8c2bb4-299d-40e7-af2f-7313f2ba0437-kube-api-access-jtmxq\") pod \"kindnet-ml6h4\" (UID: \"2b8c2bb4-299d-40e7-af2f-7313f2ba0437\") " pod="kube-system/kindnet-ml6h4"
	Nov 19 22:22:41 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:41.778110    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b687585b-f9cc-4321-9055-9b5a448fd38f-kube-proxy\") pod \"kube-proxy-r2sgg\" (UID: \"b687585b-f9cc-4321-9055-9b5a448fd38f\") " pod="kube-system/kube-proxy-r2sgg"
	Nov 19 22:22:41 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:41.778137    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b687585b-f9cc-4321-9055-9b5a448fd38f-lib-modules\") pod \"kube-proxy-r2sgg\" (UID: \"b687585b-f9cc-4321-9055-9b5a448fd38f\") " pod="kube-system/kube-proxy-r2sgg"
	Nov 19 22:22:41 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:41.778163    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2b8c2bb4-299d-40e7-af2f-7313f2ba0437-cni-cfg\") pod \"kindnet-ml6h4\" (UID: \"2b8c2bb4-299d-40e7-af2f-7313f2ba0437\") " pod="kube-system/kindnet-ml6h4"
	Nov 19 22:22:41 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:41.778194    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b8c2bb4-299d-40e7-af2f-7313f2ba0437-xtables-lock\") pod \"kindnet-ml6h4\" (UID: \"2b8c2bb4-299d-40e7-af2f-7313f2ba0437\") " pod="kube-system/kindnet-ml6h4"
	Nov 19 22:22:41 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:41.778221    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b8c2bb4-299d-40e7-af2f-7313f2ba0437-lib-modules\") pod \"kindnet-ml6h4\" (UID: \"2b8c2bb4-299d-40e7-af2f-7313f2ba0437\") " pod="kube-system/kindnet-ml6h4"
	Nov 19 22:22:41 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:41.778250    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b687585b-f9cc-4321-9055-9b5a448fd38f-xtables-lock\") pod \"kube-proxy-r2sgg\" (UID: \"b687585b-f9cc-4321-9055-9b5a448fd38f\") " pod="kube-system/kube-proxy-r2sgg"
	Nov 19 22:22:41 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:41.778272    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57jlb\" (UniqueName: \"kubernetes.io/projected/b687585b-f9cc-4321-9055-9b5a448fd38f-kube-api-access-57jlb\") pod \"kube-proxy-r2sgg\" (UID: \"b687585b-f9cc-4321-9055-9b5a448fd38f\") " pod="kube-system/kube-proxy-r2sgg"
	Nov 19 22:22:42 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:42.308588    1446 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r2sgg" podStartSLOduration=1.308561977 podStartE2EDuration="1.308561977s" podCreationTimestamp="2025-11-19 22:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:42.308407355 +0000 UTC m=+6.159694147" watchObservedRunningTime="2025-11-19 22:22:42.308561977 +0000 UTC m=+6.159848767"
	Nov 19 22:22:43 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:43.510959    1446 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-ml6h4" podStartSLOduration=2.510934008 podStartE2EDuration="2.510934008s" podCreationTimestamp="2025-11-19 22:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:43.49638496 +0000 UTC m=+7.347671750" watchObservedRunningTime="2025-11-19 22:22:43.510934008 +0000 UTC m=+7.362220798"
	Nov 19 22:22:53 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:53.236317    1446 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 22:22:53 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:53.868092    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn8cc\" (UniqueName: \"kubernetes.io/projected/df694cd4-aa56-4ee9-a15c-3c72c9bcb9c2-kube-api-access-kn8cc\") pod \"coredns-66bc5c9577-f5cqw\" (UID: \"df694cd4-aa56-4ee9-a15c-3c72c9bcb9c2\") " pod="kube-system/coredns-66bc5c9577-f5cqw"
	Nov 19 22:22:53 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:53.868164    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2d31d0ff-61d4-4948-a718-08c43b520656-tmp\") pod \"storage-provisioner\" (UID: \"2d31d0ff-61d4-4948-a718-08c43b520656\") " pod="kube-system/storage-provisioner"
	Nov 19 22:22:53 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:53.868206    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7bvw\" (UniqueName: \"kubernetes.io/projected/2d31d0ff-61d4-4948-a718-08c43b520656-kube-api-access-v7bvw\") pod \"storage-provisioner\" (UID: \"2d31d0ff-61d4-4948-a718-08c43b520656\") " pod="kube-system/storage-provisioner"
	Nov 19 22:22:53 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:53.868955    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df694cd4-aa56-4ee9-a15c-3c72c9bcb9c2-config-volume\") pod \"coredns-66bc5c9577-f5cqw\" (UID: \"df694cd4-aa56-4ee9-a15c-3c72c9bcb9c2\") " pod="kube-system/coredns-66bc5c9577-f5cqw"
	Nov 19 22:22:55 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:55.341616    1446 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.341591667 podStartE2EDuration="12.341591667s" podCreationTimestamp="2025-11-19 22:22:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:55.341280911 +0000 UTC m=+19.192567784" watchObservedRunningTime="2025-11-19 22:22:55.341591667 +0000 UTC m=+19.192878457"
	Nov 19 22:22:55 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:55.356935    1446 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-f5cqw" podStartSLOduration=13.356855823 podStartE2EDuration="13.356855823s" podCreationTimestamp="2025-11-19 22:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:55.356508044 +0000 UTC m=+19.207794846" watchObservedRunningTime="2025-11-19 22:22:55.356855823 +0000 UTC m=+19.208142613"
	Nov 19 22:22:57 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:57.900538    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4t44\" (UniqueName: \"kubernetes.io/projected/eee884c0-8976-48f4-8b93-86a4bc150754-kube-api-access-p4t44\") pod \"busybox\" (UID: \"eee884c0-8976-48f4-8b93-86a4bc150754\") " pod="default/busybox"
	
	
	==> storage-provisioner [a1d8eb49da11368af8baafedb6697768131e7a87fc151cc41099221841ba7546] <==
	I1119 22:22:54.405919       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:22:54.417129       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:22:54.417256       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:22:54.420477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:54.429346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:22:54.429577       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:22:54.429658       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c37bf72e-2310-4ea3-bd14-d23e7de696c3", APIVersion:"v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-409240_872b4934-0d00-4efe-9d23-b6c75348de0a became leader
	I1119 22:22:54.429762       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-409240_872b4934-0d00-4efe-9d23-b6c75348de0a!
	W1119 22:22:54.434715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:54.440782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:22:54.530327       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-409240_872b4934-0d00-4efe-9d23-b6c75348de0a!
	W1119 22:22:56.445806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:56.452525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:58.456903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:58.461554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:00.465419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:00.469974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:02.473629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:02.479458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:04.484788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:04.494725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:06.499032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:06.502939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:08.508315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:08.516127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-409240 -n default-k8s-diff-port-409240
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-409240 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-409240
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-409240:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3b17c5bd31e911d5318e1d15c108200bc41b828ccdc2a42595cb1e5105575917",
	        "Created": "2025-11-19T22:22:21.77385695Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 278622,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:22:21.815590779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/3b17c5bd31e911d5318e1d15c108200bc41b828ccdc2a42595cb1e5105575917/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3b17c5bd31e911d5318e1d15c108200bc41b828ccdc2a42595cb1e5105575917/hostname",
	        "HostsPath": "/var/lib/docker/containers/3b17c5bd31e911d5318e1d15c108200bc41b828ccdc2a42595cb1e5105575917/hosts",
	        "LogPath": "/var/lib/docker/containers/3b17c5bd31e911d5318e1d15c108200bc41b828ccdc2a42595cb1e5105575917/3b17c5bd31e911d5318e1d15c108200bc41b828ccdc2a42595cb1e5105575917-json.log",
	        "Name": "/default-k8s-diff-port-409240",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-409240:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-409240",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3b17c5bd31e911d5318e1d15c108200bc41b828ccdc2a42595cb1e5105575917",
	                "LowerDir": "/var/lib/docker/overlay2/392152a539b2b6aa19f080300004655fe7ee996f97e05c5db8a867188aadb05a-init/diff:/var/lib/docker/overlay2/b09480e350abbb2f4f48b19448dc8e9ddd0de679fdb8cd59ebc5b758a29b344e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/392152a539b2b6aa19f080300004655fe7ee996f97e05c5db8a867188aadb05a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/392152a539b2b6aa19f080300004655fe7ee996f97e05c5db8a867188aadb05a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/392152a539b2b6aa19f080300004655fe7ee996f97e05c5db8a867188aadb05a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-409240",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-409240/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-409240",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-409240",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-409240",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e68c02bb9ea3360a612f1134ddaf57df5b051c02e3ef2cfb13033f5b87534b1e",
	            "SandboxKey": "/var/run/docker/netns/e68c02bb9ea3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-409240": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a65c679966f5fdb88a7dd03de3ed6928298f7cf3afd6677cb80dabeb6ed9ab1f",
	                    "EndpointID": "57ff73ba7b4ad1304fa06523327a71a8daf478da794015e51f865c70924e7297",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "a2:f6:e1:37:94:d9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-409240",
	                        "3b17c5bd31e9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-409240 -n default-k8s-diff-port-409240
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-409240 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-409240 logs -n 25: (1.41475208s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-975700 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:21 UTC │
	│ delete  │ -p old-k8s-version-975700                                                                                                                                                                                                                           │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:21 UTC │
	│ delete  │ -p old-k8s-version-975700                                                                                                                                                                                                                           │ old-k8s-version-975700       │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:21 UTC │
	│ start   │ -p embed-certs-299509 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-299509           │ jenkins │ v1.37.0 │ 19 Nov 25 22:21 UTC │ 19 Nov 25 22:22 UTC │
	│ image   │ no-preload-638439 image list --format=json                                                                                                                                                                                                          │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ pause   │ -p no-preload-638439 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ unpause │ -p no-preload-638439 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ delete  │ -p no-preload-638439                                                                                                                                                                                                                                │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ start   │ -p cert-expiration-207460 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-207460       │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ delete  │ -p no-preload-638439                                                                                                                                                                                                                                │ no-preload-638439            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ delete  │ -p disable-driver-mounts-837642                                                                                                                                                                                                                     │ disable-driver-mounts-837642 │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ start   │ -p default-k8s-diff-port-409240 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-409240 │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ delete  │ -p cert-expiration-207460                                                                                                                                                                                                                           │ cert-expiration-207460       │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ start   │ -p newest-cni-982287 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-982287            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ start   │ -p kubernetes-upgrade-133839 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-133839    │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │                     │
	│ start   │ -p kubernetes-upgrade-133839 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-133839    │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:23 UTC │
	│ addons  │ enable metrics-server -p embed-certs-299509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-299509           │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ stop    │ -p embed-certs-299509 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-299509           │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:23 UTC │
	│ addons  │ enable metrics-server -p newest-cni-982287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-982287            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ stop    │ -p newest-cni-982287 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-982287            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-982287 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-982287            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │ 19 Nov 25 22:22 UTC │
	│ start   │ -p newest-cni-982287 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-982287            │ jenkins │ v1.37.0 │ 19 Nov 25 22:22 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-299509 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-299509           │ jenkins │ v1.37.0 │ 19 Nov 25 22:23 UTC │ 19 Nov 25 22:23 UTC │
	│ start   │ -p embed-certs-299509 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-299509           │ jenkins │ v1.37.0 │ 19 Nov 25 22:23 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-133839                                                                                                                                                                                                                        │ kubernetes-upgrade-133839    │ jenkins │ v1.37.0 │ 19 Nov 25 22:23 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:23:03
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:23:03.157576  291097 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:23:03.158164  291097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:23:03.158175  291097 out.go:374] Setting ErrFile to fd 2...
	I1119 22:23:03.158189  291097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:23:03.160681  291097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
	I1119 22:23:03.161677  291097 out.go:368] Setting JSON to false
	I1119 22:23:03.163195  291097 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3923,"bootTime":1763587060,"procs":316,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:23:03.163350  291097 start.go:143] virtualization: kvm guest
	I1119 22:23:03.165935  291097 out.go:179] * [embed-certs-299509] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:23:03.168168  291097 notify.go:221] Checking for updates...
	I1119 22:23:03.168751  291097 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:23:03.170450  291097 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:23:03.172318  291097 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:23:03.173692  291097 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9296/.minikube
	I1119 22:23:03.174947  291097 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:23:03.176377  291097 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:23:03.178127  291097 config.go:182] Loaded profile config "embed-certs-299509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:23:03.178793  291097 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:23:03.210556  291097 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:23:03.210658  291097 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:23:03.292238  291097 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-19 22:23:03.279121572 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:23:03.292398  291097 docker.go:319] overlay module found
	I1119 22:23:03.294154  291097 out.go:179] * Using the docker driver based on existing profile
	I1119 22:23:03.295345  291097 start.go:309] selected driver: docker
	I1119 22:23:03.295362  291097 start.go:930] validating driver "docker" against &{Name:embed-certs-299509 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-299509 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:23:03.295472  291097 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:23:03.296267  291097 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:23:03.387380  291097 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-19 22:23:03.374795628 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:23:03.387761  291097 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:23:03.387801  291097 cni.go:84] Creating CNI manager for ""
	I1119 22:23:03.387863  291097 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:23:03.387986  291097 start.go:353] cluster config:
	{Name:embed-certs-299509 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-299509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:23:03.389793  291097 out.go:179] * Starting "embed-certs-299509" primary control-plane node in "embed-certs-299509" cluster
	I1119 22:23:03.391334  291097 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 22:23:03.392412  291097 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:23:03.393463  291097 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:23:03.393516  291097 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1119 22:23:03.393534  291097 cache.go:65] Caching tarball of preloaded images
	I1119 22:23:03.393632  291097 preload.go:238] Found /home/jenkins/minikube-integration/21918-9296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1119 22:23:03.393653  291097 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1119 22:23:03.393785  291097 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/embed-certs-299509/config.json ...
	I1119 22:23:03.394071  291097 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:23:03.426146  291097 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:23:03.426190  291097 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:23:03.426206  291097 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:23:03.426246  291097 start.go:360] acquireMachinesLock for embed-certs-299509: {Name:mk01324288749056d93755268e5197a67d733c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:23:03.426308  291097 start.go:364] duration metric: took 38.25µs to acquireMachinesLock for "embed-certs-299509"
	I1119 22:23:03.426330  291097 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:23:03.426343  291097 fix.go:54] fixHost starting: 
	I1119 22:23:03.426633  291097 cli_runner.go:164] Run: docker container inspect embed-certs-299509 --format={{.State.Status}}
	I1119 22:23:03.452005  291097 fix.go:112] recreateIfNeeded on embed-certs-299509: state=Stopped err=<nil>
	W1119 22:23:03.452048  291097 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 22:23:02.736661  289819 ssh_runner.go:195] Run: systemctl --version
	I1119 22:23:02.812191  289819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:23:02.818626  289819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:23:02.818718  289819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:23:02.828777  289819 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
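Note: the find invocation logged above has had its shell quoting stripped by the logger; a properly quoted equivalent of the same rename (a sketch, not minikube's literal command line) would be:

# move any bridge/podman CNI configs aside so they cannot conflict with the
# CNI minikube installs for this profile (kindnet, per the lines above)
sudo find /etc/cni/net.d -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;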
	I1119 22:23:02.828801  289819 start.go:496] detecting cgroup driver to use...
	I1119 22:23:02.828845  289819 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 22:23:02.828920  289819 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 22:23:02.853350  289819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 22:23:02.871869  289819 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:23:02.871949  289819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:23:02.893720  289819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:23:02.912464  289819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:23:03.036114  289819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:23:03.149726  289819 docker.go:234] disabling docker service ...
	I1119 22:23:03.149802  289819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:23:03.169470  289819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:23:03.188181  289819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:23:03.305350  289819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:23:03.439604  289819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
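Note: taken together, the systemctl runs above amount to the following sequence, which leaves containerd as the only container runtime answering on the node (a condensed sketch of the commands already shown, not additional steps):

# stop, disable and mask cri-docker and docker so only containerd serves the CRI
sudo systemctl stop -f cri-docker.socket cri-docker.service
sudo systemctl disable cri-docker.socket
sudo systemctl mask cri-docker.service
sudo systemctl stop -f docker.socket docker.service
sudo systemctl disable docker.socket
sudo systemctl mask docker.service
sudo systemctl is-active docker || echo "docker is no longer active"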
	I1119 22:23:03.457497  289819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:23:03.476108  289819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 22:23:03.488537  289819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 22:23:03.501118  289819 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1119 22:23:03.501294  289819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1119 22:23:03.515209  289819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:23:03.528017  289819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 22:23:03.540494  289819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:23:03.552364  289819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:23:03.565022  289819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 22:23:03.576305  289819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 22:23:03.589496  289819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 22:23:03.603528  289819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:23:03.624005  289819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:23:03.635724  289819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:23:03.758525  289819 ssh_runner.go:195] Run: sudo systemctl restart containerd
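Note: the sed edits above rewrite /etc/containerd/config.toml in place; after the restart, the settings they target can be spot-checked directly (a verification sketch, assuming the stock minikube node image layout):

# confirm the cgroup driver, sandbox image, CNI conf dir and unprivileged-port
# settings that the sed edits above were meant to produce
sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
sudo systemctl is-active containerd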
	I1119 22:23:04.070301  289819 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 22:23:04.070393  289819 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 22:23:04.075986  289819 start.go:564] Will wait 60s for crictl version
	I1119 22:23:04.076174  289819 ssh_runner.go:195] Run: which crictl
	I1119 22:23:04.081168  289819 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:23:04.113615  289819 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 22:23:04.113679  289819 ssh_runner.go:195] Run: containerd --version
	I1119 22:23:04.143037  289819 ssh_runner.go:195] Run: containerd --version
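Note: the /etc/crictl.yaml written a few lines earlier is what lets the bare crictl calls here and below talk to containerd without an explicit --runtime-endpoint flag; the equivalent manual setup and probe would be (a sketch):

# point crictl at containerd's socket, then ask the runtime for its version
printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
sudo crictl version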
	I1119 22:23:04.265688  289819 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1119 22:23:04.307205  289819 cli_runner.go:164] Run: docker network inspect newest-cni-982287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:23:04.334262  289819 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 22:23:04.340151  289819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
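Note: the bash one-liner above is minikube's idempotent way of pinning host.minikube.internal to the docker network gateway (192.168.85.1, from the network inspect just before it); written out more readably, with a hypothetical temp-file name:

# drop any existing host.minikube.internal line, append the current gateway,
# then copy the result back over /etc/hosts in one step
{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
  printf '192.168.85.1\thost.minikube.internal\n'; } > /tmp/hosts.new
sudo cp /tmp/hosts.new /etc/hosts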
	I1119 22:23:04.356873  289819 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1119 22:23:04.358574  289819 kubeadm.go:884] updating cluster {Name:newest-cni-982287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-982287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:23:04.358770  289819 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:23:04.358841  289819 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:23:04.398139  289819 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:23:04.398162  289819 containerd.go:534] Images already preloaded, skipping extraction
	I1119 22:23:04.398229  289819 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:23:04.434518  289819 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:23:04.434545  289819 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:23:04.434555  289819 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1119 22:23:04.434692  289819 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-982287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-982287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
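Note: reassembled from the fragments above, the kubelet drop-in that is copied to the node a few lines below (10-kubeadm.conf, 321 bytes) looks roughly like this; the heredoc form is a reconstruction for readability, not minikube's exact transfer mechanism:

# write the kubelet systemd drop-in, then reload and start kubelet as the log does
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-982287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2

[Install]
EOF
sudo systemctl daemon-reload && sudo systemctl start kubelet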
	I1119 22:23:04.434757  289819 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:23:04.473195  289819 cni.go:84] Creating CNI manager for ""
	I1119 22:23:04.473224  289819 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:23:04.473244  289819 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 22:23:04.473273  289819 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-982287 NodeName:newest-cni-982287 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:23:04.476476  289819 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-982287"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:23:04.476589  289819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:23:04.489907  289819 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:23:04.489991  289819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:23:04.502074  289819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1119 22:23:04.519629  289819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:23:04.535867  289819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
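Note: with the config above copied to /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked against the kubeadm API types before the control plane is brought up. The "kubeadm config validate" subcommand exists in recent releases (v1.26+); using it here is an assumption about the bundled v1.34.1 binary, not something the log shows minikube doing:

# validate the generated InitConfiguration/ClusterConfiguration/Kubelet/KubeProxy documents
sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new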
	I1119 22:23:04.552231  289819 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:23:04.556322  289819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:23:04.581469  289819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:23:04.693660  289819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:23:04.718231  289819 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287 for IP: 192.168.85.2
	I1119 22:23:04.718257  289819 certs.go:195] generating shared ca certs ...
	I1119 22:23:04.718280  289819 certs.go:227] acquiring lock for ca certs: {Name:mkfe62d1b64cfdbe1c6a3d1f38aa0edc5b9ec419 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:23:04.718419  289819 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key
	I1119 22:23:04.718456  289819 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key
	I1119 22:23:04.718466  289819 certs.go:257] generating profile certs ...
	I1119 22:23:04.718538  289819 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/client.key
	I1119 22:23:04.718592  289819 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.key.9887c082
	I1119 22:23:04.718627  289819 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.key
	I1119 22:23:04.718723  289819 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem (1338 bytes)
	W1119 22:23:04.718762  289819 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821_empty.pem, impossibly tiny 0 bytes
	I1119 22:23:04.718772  289819 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:23:04.718795  289819 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:23:04.718816  289819 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:23:04.718836  289819 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem (1679 bytes)
	I1119 22:23:04.718873  289819 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem (1708 bytes)
	I1119 22:23:04.719523  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:23:04.740425  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:23:04.767473  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:23:04.788643  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:23:04.812826  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 22:23:04.840949  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:23:04.874334  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:23:04.904591  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/newest-cni-982287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:23:04.934179  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem --> /usr/share/ca-certificates/12821.pem (1338 bytes)
	I1119 22:23:04.960987  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem --> /usr/share/ca-certificates/128212.pem (1708 bytes)
	I1119 22:23:04.986601  289819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:23:05.015180  289819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:23:05.034021  289819 ssh_runner.go:195] Run: openssl version
	I1119 22:23:05.043564  289819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:23:05.055163  289819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:23:05.060517  289819 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:48 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:23:05.060591  289819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:23:05.110854  289819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:23:05.121131  289819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12821.pem && ln -fs /usr/share/ca-certificates/12821.pem /etc/ssl/certs/12821.pem"
	I1119 22:23:05.132030  289819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12821.pem
	I1119 22:23:05.137183  289819 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12821.pem
	I1119 22:23:05.137252  289819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12821.pem
	I1119 22:23:05.181980  289819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12821.pem /etc/ssl/certs/51391683.0"
	I1119 22:23:05.190709  289819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128212.pem && ln -fs /usr/share/ca-certificates/128212.pem /etc/ssl/certs/128212.pem"
	I1119 22:23:05.200151  289819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128212.pem
	I1119 22:23:05.204199  289819 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128212.pem
	I1119 22:23:05.204246  289819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128212.pem
	I1119 22:23:05.241474  289819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128212.pem /etc/ssl/certs/3ec20f2e.0"
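Note: the oddly named symlinks being created above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: the hash of each PEM decides the filename OpenSSL looks for under /etc/ssl/certs. A sketch of how one of them is derived:

# the symlink name is "<subject hash>.0", computed from the certificate itself
hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"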
	I1119 22:23:05.251448  289819 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:23:05.256282  289819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:23:05.317457  289819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:23:05.376306  289819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:23:05.430481  289819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:23:05.489491  289819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:23:05.548292  289819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
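Note: each of the openssl runs above uses -checkend 86400, i.e. "will this certificate still be valid 24 hours from now?"; the exit status is what minikube keys off, as in:

# exit 0 => certificate does not expire within the next 86400s (24h)
sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
  && echo "still valid beyond 24h" || echo "expires within 24h"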
	I1119 22:23:05.609139  289819 kubeadm.go:401] StartCluster: {Name:newest-cni-982287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-982287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:23:05.609282  289819 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:23:05.609365  289819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:23:05.648293  289819 cri.go:89] found id: "b7d166c33134d44e66ca4a753e174e29a312fb2e2f29c86b1cfb68a69de31c7b"
	I1119 22:23:05.648314  289819 cri.go:89] found id: "a81dc22bffadb4e4700283d758642382781bafd9b400949aa319477bbf4716b4"
	I1119 22:23:05.648320  289819 cri.go:89] found id: "d27d2794ce97035a004943adb54793085a521eb6a4513ec91c5bc2b1338354be"
	I1119 22:23:05.648324  289819 cri.go:89] found id: "b85e2e9bcc7d829034e5bbb16c165ae048593eeb427376782688adf2c8f4e90c"
	I1119 22:23:05.648328  289819 cri.go:89] found id: "f9c3e3ecb27478acc04ca5aa5d95be81bb8e6c96ff78b5374efbedc73cdb29d6"
	I1119 22:23:05.648333  289819 cri.go:89] found id: "29e1f5f1cb1b99b3a66e14a972ea7feb7293705bd1f6592ee1778524e3d3123d"
	I1119 22:23:05.648337  289819 cri.go:89] found id: "40a972971a9ab382c5ce85f8e882c14361f78c8897c6eb4b30e506a82326e560"
	I1119 22:23:05.648351  289819 cri.go:89] found id: "5929321495932c0c7ce8e595969edf334477c61b5b8615fe0fb88171d9eab230"
	I1119 22:23:05.648355  289819 cri.go:89] found id: "b0ed6ae5b675bb949adc944d3cc7d20404c52d58c9b7941be15cb28ed46d31bd"
	I1119 22:23:05.648365  289819 cri.go:89] found id: "b7ca1178966cc3f9bd224b8a6e7f789c2cbc30d136613381476b117423f76175"
	I1119 22:23:05.648374  289819 cri.go:89] found id: ""
	I1119 22:23:05.648417  289819 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1119 22:23:05.676356  289819 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"0ae41cc1c3304356a12f7e04027230cd00200db95bcd1364ccc13532416b3951","pid":831,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ae41cc1c3304356a12f7e04027230cd00200db95bcd1364ccc13532416b3951","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ae41cc1c3304356a12f7e04027230cd00200db95bcd1364ccc13532416b3951/rootfs","created":"2025-11-19T22:23:05.33900885Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"0ae41cc1c3304356a12f7e04027230cd00200db95bcd1364ccc13532416b3951","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-982287_6e6c969e5deca3d08506d67c9b1d82a2","io.kubernetes.cri.sandbox-memory":"0","io.
kubernetes.cri.sandbox-name":"etcd-newest-cni-982287","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"6e6c969e5deca3d08506d67c9b1d82a2"},"owner":"root"},{"ociVersion":"1.2.1","id":"1d58f88af5f9c3ac6165c6fcf22026ff419033f3c3753ae01f6e537524b94051","pid":857,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d58f88af5f9c3ac6165c6fcf22026ff419033f3c3753ae01f6e537524b94051","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d58f88af5f9c3ac6165c6fcf22026ff419033f3c3753ae01f6e537524b94051/rootfs","created":"2025-11-19T22:23:05.352104894Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"1d58f88af5f9c3ac6165c6fcf22026ff419033f3c3753ae01f6e537524b94051","io.kubernetes.cri.sandbox-log-direct
ory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-982287_2df7730119fc52593f99444d647fbfb1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-982287","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2df7730119fc52593f99444d647fbfb1"},"owner":"root"},{"ociVersion":"1.2.1","id":"a81dc22bffadb4e4700283d758642382781bafd9b400949aa319477bbf4716b4","pid":964,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a81dc22bffadb4e4700283d758642382781bafd9b400949aa319477bbf4716b4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a81dc22bffadb4e4700283d758642382781bafd9b400949aa319477bbf4716b4/rootfs","created":"2025-11-19T22:23:05.504621335Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"abd9d35da1e9bc4
6eddaaa9fa4458983d302ba75987e8150470cc30431c20a56","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-982287","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"71354f3089fb5cfec338c075a5031d58"},"owner":"root"},{"ociVersion":"1.2.1","id":"abd9d35da1e9bc46eddaaa9fa4458983d302ba75987e8150470cc30431c20a56","pid":864,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/abd9d35da1e9bc46eddaaa9fa4458983d302ba75987e8150470cc30431c20a56","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/abd9d35da1e9bc46eddaaa9fa4458983d302ba75987e8150470cc30431c20a56/rootfs","created":"2025-11-19T22:23:05.356690028Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"abd9d35da1e9bc46eddaaa9fa4458
983d302ba75987e8150470cc30431c20a56","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-982287_71354f3089fb5cfec338c075a5031d58","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-982287","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"71354f3089fb5cfec338c075a5031d58"},"owner":"root"},{"ociVersion":"1.2.1","id":"b7d166c33134d44e66ca4a753e174e29a312fb2e2f29c86b1cfb68a69de31c7b","pid":971,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7d166c33134d44e66ca4a753e174e29a312fb2e2f29c86b1cfb68a69de31c7b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7d166c33134d44e66ca4a753e174e29a312fb2e2f29c86b1cfb68a69de31c7b/rootfs","created":"2025-11-19T22:23:05.506717423Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.
io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"1d58f88af5f9c3ac6165c6fcf22026ff419033f3c3753ae01f6e537524b94051","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-982287","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2df7730119fc52593f99444d647fbfb1"},"owner":"root"},{"ociVersion":"1.2.1","id":"b7d3e6db6d219fdcc3a94acf1d9a708d12414f6ce528902b27db6573c4dbe83d","pid":816,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7d3e6db6d219fdcc3a94acf1d9a708d12414f6ce528902b27db6573c4dbe83d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7d3e6db6d219fdcc3a94acf1d9a708d12414f6ce528902b27db6573c4dbe83d/rootfs","created":"2025-11-19T22:23:05.334820545Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"1
02","io.kubernetes.cri.sandbox-id":"b7d3e6db6d219fdcc3a94acf1d9a708d12414f6ce528902b27db6573c4dbe83d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-982287_ffae298dd0003cc89157416bc9023259","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-982287","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ffae298dd0003cc89157416bc9023259"},"owner":"root"},{"ociVersion":"1.2.1","id":"b85e2e9bcc7d829034e5bbb16c165ae048593eeb427376782688adf2c8f4e90c","pid":930,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b85e2e9bcc7d829034e5bbb16c165ae048593eeb427376782688adf2c8f4e90c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b85e2e9bcc7d829034e5bbb16c165ae048593eeb427376782688adf2c8f4e90c/rootfs","created":"2025-11-19T22:23:05.479173858Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kube
rnetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"0ae41cc1c3304356a12f7e04027230cd00200db95bcd1364ccc13532416b3951","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-982287","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"6e6c969e5deca3d08506d67c9b1d82a2"},"owner":"root"},{"ociVersion":"1.2.1","id":"d27d2794ce97035a004943adb54793085a521eb6a4513ec91c5bc2b1338354be","pid":941,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d27d2794ce97035a004943adb54793085a521eb6a4513ec91c5bc2b1338354be","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d27d2794ce97035a004943adb54793085a521eb6a4513ec91c5bc2b1338354be/rootfs","created":"2025-11-19T22:23:05.482918643Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"b7d3e6db6d219fdcc3a94acf1d9a7
08d12414f6ce528902b27db6573c4dbe83d","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-982287","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ffae298dd0003cc89157416bc9023259"},"owner":"root"}]
	I1119 22:23:05.676612  289819 cri.go:126] list returned 8 containers
	I1119 22:23:05.676636  289819 cri.go:129] container: {ID:0ae41cc1c3304356a12f7e04027230cd00200db95bcd1364ccc13532416b3951 Status:running}
	I1119 22:23:05.676655  289819 cri.go:131] skipping 0ae41cc1c3304356a12f7e04027230cd00200db95bcd1364ccc13532416b3951 - not in ps
	I1119 22:23:05.676666  289819 cri.go:129] container: {ID:1d58f88af5f9c3ac6165c6fcf22026ff419033f3c3753ae01f6e537524b94051 Status:running}
	I1119 22:23:05.676674  289819 cri.go:131] skipping 1d58f88af5f9c3ac6165c6fcf22026ff419033f3c3753ae01f6e537524b94051 - not in ps
	I1119 22:23:05.676679  289819 cri.go:129] container: {ID:a81dc22bffadb4e4700283d758642382781bafd9b400949aa319477bbf4716b4 Status:running}
	I1119 22:23:05.676687  289819 cri.go:135] skipping {a81dc22bffadb4e4700283d758642382781bafd9b400949aa319477bbf4716b4 running}: state = "running", want "paused"
	I1119 22:23:05.676698  289819 cri.go:129] container: {ID:abd9d35da1e9bc46eddaaa9fa4458983d302ba75987e8150470cc30431c20a56 Status:running}
	I1119 22:23:05.676705  289819 cri.go:131] skipping abd9d35da1e9bc46eddaaa9fa4458983d302ba75987e8150470cc30431c20a56 - not in ps
	I1119 22:23:05.676711  289819 cri.go:129] container: {ID:b7d166c33134d44e66ca4a753e174e29a312fb2e2f29c86b1cfb68a69de31c7b Status:running}
	I1119 22:23:05.676717  289819 cri.go:135] skipping {b7d166c33134d44e66ca4a753e174e29a312fb2e2f29c86b1cfb68a69de31c7b running}: state = "running", want "paused"
	I1119 22:23:05.676724  289819 cri.go:129] container: {ID:b7d3e6db6d219fdcc3a94acf1d9a708d12414f6ce528902b27db6573c4dbe83d Status:running}
	I1119 22:23:05.676730  289819 cri.go:131] skipping b7d3e6db6d219fdcc3a94acf1d9a708d12414f6ce528902b27db6573c4dbe83d - not in ps
	I1119 22:23:05.676735  289819 cri.go:129] container: {ID:b85e2e9bcc7d829034e5bbb16c165ae048593eeb427376782688adf2c8f4e90c Status:running}
	I1119 22:23:05.676742  289819 cri.go:135] skipping {b85e2e9bcc7d829034e5bbb16c165ae048593eeb427376782688adf2c8f4e90c running}: state = "running", want "paused"
	I1119 22:23:05.676747  289819 cri.go:129] container: {ID:d27d2794ce97035a004943adb54793085a521eb6a4513ec91c5bc2b1338354be Status:running}
	I1119 22:23:05.676755  289819 cri.go:135] skipping {d27d2794ce97035a004943adb54793085a521eb6a4513ec91c5bc2b1338354be running}: state = "running", want "paused"
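Note: the restart path above combines two listings: candidate kube-system container IDs from crictl, and each container's live runc state, so that only containers already paused would be acted on (everything here is running, so all are skipped). The same two queries can be run by hand:

# container IDs labelled as kube-system pods, whether running or not
sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
# full runc state (status, pid, bundle) for everything under the k8s.io namespace
sudo runc --root /run/containerd/runc/k8s.io list -f json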
	I1119 22:23:05.676804  289819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:23:05.687151  289819 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 22:23:05.687174  289819 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 22:23:05.687304  289819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 22:23:05.698213  289819 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:23:05.699325  289819 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-982287" does not appear in /home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:23:05.700295  289819 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-9296/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-982287" cluster setting kubeconfig missing "newest-cni-982287" context setting]
	I1119 22:23:05.701354  289819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/kubeconfig: {Name:mk5b9093863cb8ca8629eea9fd861356875781d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:23:05.703694  289819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 22:23:05.716240  289819 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1119 22:23:05.716289  289819 kubeadm.go:602] duration metric: took 29.108218ms to restartPrimaryControlPlane
	I1119 22:23:05.716301  289819 kubeadm.go:403] duration metric: took 107.170746ms to StartCluster
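Note: the kubeconfig check above failed simply because the newest-cni-982287 cluster and context had not yet been written back after the stop/start cycle; whether the repair below succeeded can be confirmed with a plain kubectl query (a sketch using the jenkins kubeconfig path from the log):

kubectl --kubeconfig /home/jenkins/minikube-integration/21918-9296/kubeconfig config get-contexts newest-cni-982287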
	I1119 22:23:05.716321  289819 settings.go:142] acquiring lock: {Name:mk3c795849984e82ee99295088dd85252bd75f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:23:05.716382  289819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:23:05.717905  289819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/kubeconfig: {Name:mk5b9093863cb8ca8629eea9fd861356875781d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:23:05.718189  289819 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:23:05.718408  289819 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:23:05.718507  289819 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-982287"
	I1119 22:23:05.718533  289819 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-982287"
	W1119 22:23:05.718540  289819 addons.go:248] addon storage-provisioner should already be in state true
	I1119 22:23:05.718572  289819 host.go:66] Checking if "newest-cni-982287" exists ...
	I1119 22:23:05.718645  289819 addons.go:70] Setting default-storageclass=true in profile "newest-cni-982287"
	I1119 22:23:05.718684  289819 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-982287"
	I1119 22:23:05.718933  289819 config.go:182] Loaded profile config "newest-cni-982287": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:23:05.718988  289819 addons.go:70] Setting metrics-server=true in profile "newest-cni-982287"
	I1119 22:23:05.719000  289819 addons.go:239] Setting addon metrics-server=true in "newest-cni-982287"
	W1119 22:23:05.719008  289819 addons.go:248] addon metrics-server should already be in state true
	I1119 22:23:05.719031  289819 host.go:66] Checking if "newest-cni-982287" exists ...
	I1119 22:23:05.719109  289819 cli_runner.go:164] Run: docker container inspect newest-cni-982287 --format={{.State.Status}}
	I1119 22:23:05.719314  289819 addons.go:70] Setting dashboard=true in profile "newest-cni-982287"
	I1119 22:23:05.719330  289819 addons.go:239] Setting addon dashboard=true in "newest-cni-982287"
	W1119 22:23:05.719337  289819 addons.go:248] addon dashboard should already be in state true
	I1119 22:23:05.719361  289819 host.go:66] Checking if "newest-cni-982287" exists ...
	I1119 22:23:05.719498  289819 cli_runner.go:164] Run: docker container inspect newest-cni-982287 --format={{.State.Status}}
	I1119 22:23:05.719841  289819 cli_runner.go:164] Run: docker container inspect newest-cni-982287 --format={{.State.Status}}
	I1119 22:23:05.720090  289819 cli_runner.go:164] Run: docker container inspect newest-cni-982287 --format={{.State.Status}}
	I1119 22:23:05.729495  289819 out.go:179] * Verifying Kubernetes components...
	I1119 22:23:05.731962  289819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:23:05.758586  289819 addons.go:239] Setting addon default-storageclass=true in "newest-cni-982287"
	W1119 22:23:05.758895  289819 addons.go:248] addon default-storageclass should already be in state true
	I1119 22:23:05.758927  289819 host.go:66] Checking if "newest-cni-982287" exists ...
	I1119 22:23:05.759564  289819 cli_runner.go:164] Run: docker container inspect newest-cni-982287 --format={{.State.Status}}
	I1119 22:23:05.762523  289819 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 22:23:05.763367  289819 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:23:05.765322  289819 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 22:23:05.765529  289819 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:23:05.765549  289819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:23:05.765645  289819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:23:05.766612  289819 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 22:23:05.766687  289819 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 22:23:05.766783  289819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:23:05.773182  289819 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
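The kubeconfig repair logged above ("newest-cni-982287" missing from both the clusters and contexts sections) boils down to checking whether the profile name appears in the kubeconfig file. A minimal sketch of that check, assuming gopkg.in/yaml.v3 is available (minikube's real implementation goes through client-go's clientcmd; the path and profile name below are just the ones from the log):

// kubeconfig_check.go - illustrative only, not minikube's code.
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// Only the fields needed for the check are decoded.
type kubeconfig struct {
	Clusters []struct {
		Name string `yaml:"name"`
	} `yaml:"clusters"`
	Contexts []struct {
		Name string `yaml:"name"`
	} `yaml:"contexts"`
}

func main() {
	path := "/home/jenkins/minikube-integration/21918-9296/kubeconfig" // from the log
	profile := "newest-cni-982287"                                     // from the log

	raw, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read kubeconfig:", err)
		os.Exit(1)
	}
	var cfg kubeconfig
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, "parse kubeconfig:", err)
		os.Exit(1)
	}

	hasCluster, hasContext := false, false
	for _, c := range cfg.Clusters {
		if c.Name == profile {
			hasCluster = true
		}
	}
	for _, c := range cfg.Contexts {
		if c.Name == profile {
			hasContext = true
		}
	}
	if !hasCluster || !hasContext {
		// This is the state the log reports as "needs updating (will repair)".
		fmt.Printf("kubeconfig missing %q (cluster: %v, context: %v)\n", profile, hasCluster, hasContext)
		return
	}
	fmt.Printf("kubeconfig already contains %q\n", profile)
}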
	I1119 22:23:04.844028  286310 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (4.026589876s)
	I1119 22:23:04.844082  286310 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1119 22:23:04.844137  286310 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1119 22:23:04.844198  286310 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1119 22:23:05.144370  286310 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1119 22:23:05.144409  286310 cache_images.go:125] Successfully loaded all cached images
	I1119 22:23:05.144415  286310 cache_images.go:94] duration metric: took 11.177160398s to LoadCachedImages
	I1119 22:23:05.144429  286310 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1119 22:23:05.144559  286310 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-133839 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-133839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:23:05.144619  286310 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:23:05.174107  286310 cni.go:84] Creating CNI manager for ""
	I1119 22:23:05.174131  286310 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:23:05.174146  286310 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:23:05.174165  286310 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-133839 NodeName:kubernetes-upgrade-133839 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:23:05.174920  286310 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-133839"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:23:05.175032  286310 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:23:05.185665  286310 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:23:05.185741  286310 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:23:05.194710  286310 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1119 22:23:05.209129  286310 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:23:05.222603  286310 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
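The kubeadm config rendered above is a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets copied to /var/tmp/minikube/kubeadm.yaml.new. A small sketch, assuming gopkg.in/yaml.v3, that splits such a file into its documents and prints each apiVersion/kind pair; useful for eyeballing what was generated (the path is the one from the log, and this is not part of minikube itself):

// kubeadm_docs.go - illustrative only.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break // no more YAML documents
		}
		if err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}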
	I1119 22:23:05.235675  286310 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:23:05.240236  286310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:23:05.407269  286310 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:23:05.423513  286310 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839 for IP: 192.168.76.2
	I1119 22:23:05.423538  286310 certs.go:195] generating shared ca certs ...
	I1119 22:23:05.423557  286310 certs.go:227] acquiring lock for ca certs: {Name:mkfe62d1b64cfdbe1c6a3d1f38aa0edc5b9ec419 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:23:05.423725  286310 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key
	I1119 22:23:05.423779  286310 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key
	I1119 22:23:05.423789  286310 certs.go:257] generating profile certs ...
	I1119 22:23:05.423938  286310 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/client.key
	I1119 22:23:05.424016  286310 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/apiserver.key.7b5ba011
	I1119 22:23:05.424060  286310 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/proxy-client.key
	I1119 22:23:05.424213  286310 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem (1338 bytes)
	W1119 22:23:05.424264  286310 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821_empty.pem, impossibly tiny 0 bytes
	I1119 22:23:05.424274  286310 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:23:05.424305  286310 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:23:05.424333  286310 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:23:05.424358  286310 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem (1679 bytes)
	I1119 22:23:05.424410  286310 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem (1708 bytes)
	I1119 22:23:05.425266  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:23:05.454516  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:23:05.480826  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:23:05.504307  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:23:05.532268  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1119 22:23:05.567173  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:23:05.594103  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:23:05.620056  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 22:23:05.643804  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/certs/12821.pem --> /usr/share/ca-certificates/12821.pem (1338 bytes)
	I1119 22:23:05.667446  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem --> /usr/share/ca-certificates/128212.pem (1708 bytes)
	I1119 22:23:05.696268  286310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:23:05.725121  286310 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:23:05.766871  286310 ssh_runner.go:195] Run: openssl version
	I1119 22:23:05.812278  286310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:23:05.843457  286310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:23:05.859132  286310 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:48 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:23:05.859209  286310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:23:05.950305  286310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:23:05.967930  286310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12821.pem && ln -fs /usr/share/ca-certificates/12821.pem /etc/ssl/certs/12821.pem"
	I1119 22:23:05.982903  286310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12821.pem
	I1119 22:23:05.989718  286310 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12821.pem
	I1119 22:23:05.989773  286310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12821.pem
	I1119 22:23:06.083354  286310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12821.pem /etc/ssl/certs/51391683.0"
	I1119 22:23:06.107662  286310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128212.pem && ln -fs /usr/share/ca-certificates/128212.pem /etc/ssl/certs/128212.pem"
	I1119 22:23:06.137374  286310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128212.pem
	I1119 22:23:06.146914  286310 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128212.pem
	I1119 22:23:06.146980  286310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128212.pem
	I1119 22:23:06.217031  286310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128212.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:23:06.237962  286310 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:23:06.244988  286310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:23:06.307687  286310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:23:06.369003  286310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:23:06.415194  286310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:23:06.462108  286310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:23:06.520233  286310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
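Each of the openssl runs above uses "-checkend 86400", i.e. the certificate must still be valid 24 hours from now, otherwise minikube regenerates it. The same check can be reproduced with the Go standard library; a sketch, using one of the cert paths from the log:

// cert_checkend.go - standard-library equivalent of
// "openssl x509 -noout -in <cert> -checkend 86400"; illustrative only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt" // from the log
	raw, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM data in", path)
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	deadline := time.Now().Add(86400 * time.Second) // the -checkend window
	if cert.NotAfter.Before(deadline) {
		fmt.Printf("%s expires %s - would need regeneration\n", path, cert.NotAfter)
		os.Exit(1)
	}
	fmt.Printf("%s still valid past %s\n", path, deadline)
}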
	I1119 22:23:06.557314  286310 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-133839 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-133839 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:23:06.557433  286310 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:23:06.557491  286310 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:23:06.590993  286310 cri.go:89] found id: "e6316f8dba2f64d811a9d3ec5d93950941d5e9cd21f83f287f740b63be231449"
	I1119 22:23:06.591025  286310 cri.go:89] found id: "b1850c507b8d7b55d64e9560edb98ea69ba385876fef8cfd3990158117278246"
	I1119 22:23:06.591031  286310 cri.go:89] found id: "4cd8c2005e8b0877eb2dd9714839c24aacf507d93fd07247c351abe430820f77"
	I1119 22:23:06.591036  286310 cri.go:89] found id: "3c390bebf1711248eb6ba3c2e76260c397d77fba97d83b003d5a65442330ea0d"
	I1119 22:23:06.591041  286310 cri.go:89] found id: "0b0fe3ba5621fe86ba74fa964ae824ec11bef11ced5d8ea12e42366b74988ae3"
	I1119 22:23:06.591046  286310 cri.go:89] found id: "244c93789be8217b040372f4bfd570deb19baa15d6e0d90a065d489983f16a8e"
	I1119 22:23:06.591053  286310 cri.go:89] found id: "a5b71ee7a7994d78ff08be65acf5df1b8e4cfa1aa223e09b338aaebf06ada4f5"
	I1119 22:23:06.591057  286310 cri.go:89] found id: "98d570aaaab466be92c2b69a5dfd5847f0e4ace601469147491780d7fc1a0111"
	I1119 22:23:06.591060  286310 cri.go:89] found id: ""
	I1119 22:23:06.591109  286310 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1119 22:23:06.621722  286310 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"12d9ec11a4972f320e48cf8fca8450b4c2a5ff30856118beab31d8c327b60b3c","pid":10342,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12d9ec11a4972f320e48cf8fca8450b4c2a5ff30856118beab31d8c327b60b3c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12d9ec11a4972f320e48cf8fca8450b4c2a5ff30856118beab31d8c327b60b3c/rootfs","created":"2025-11-19T22:22:40.547082186Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"12d9ec11a4972f320e48cf8fca8450b4c2a5ff30856118beab31d8c327b60b3c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-133839_a73bb1d93b00961144fed68962189df9","io.kubernete
s.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a73bb1d93b00961144fed68962189df9"},"owner":"root"},{"ociVersion":"1.2.1","id":"1ed7f1f8ffb70b1f7e64c7e1ea227b5db5955a6c016d0854795f217cb90a0535","pid":11153,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ed7f1f8ffb70b1f7e64c7e1ea227b5db5955a6c016d0854795f217cb90a0535","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ed7f1f8ffb70b1f7e64c7e1ea227b5db5955a6c016d0854795f217cb90a0535/rootfs","created":"2025-11-19T22:22:50.782298407Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"1ed7f1f8ffb70b1f7e64c7e1ea227b5db5955a6c016d
0854795f217cb90a0535","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-133839_7dffba8d778115ced44b1ff92d7a1c7d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7dffba8d778115ced44b1ff92d7a1c7d"},"owner":"root"},{"ociVersion":"1.2.1","id":"44a069aaaa220922fff9974f11af1aeabeb7c8e060df6704a442191e39e0b20a","pid":10319,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/44a069aaaa220922fff9974f11af1aeabeb7c8e060df6704a442191e39e0b20a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/44a069aaaa220922fff9974f11af1aeabeb7c8e060df6704a442191e39e0b20a/rootfs","created":"2025-11-19T22:22:40.542330901Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernete
s.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"44a069aaaa220922fff9974f11af1aeabeb7c8e060df6704a442191e39e0b20a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-133839_0ba3d982eebe7faf07b0096bf84838b5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0ba3d982eebe7faf07b0096bf84838b5"},"owner":"root"},{"ociVersion":"1.2.1","id":"48b8bb5abe3a6c0776b6c080a8596df219117928b5b203bf5156ace4ce6b61bf","pid":10452,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/48b8bb5abe3a6c0776b6c080a8596df219117928b5b203bf5156ace4ce6b61bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/48b8bb5abe3a6c0776b6c080a8596df219117928b5b203bf5156ace4ce6b61bf/rootfs","created":"2025-11-19T22:22:40.66540927Z","annotations":{"io.kubern
etes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"44a069aaaa220922fff9974f11af1aeabeb7c8e060df6704a442191e39e0b20a","io.kubernetes.cri.sandbox-name":"kube-apiserver-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0ba3d982eebe7faf07b0096bf84838b5"},"owner":"root"},{"ociVersion":"1.2.1","id":"4fc1258a098ed2efa6fb8a75abb3c0bb075dc11c5ffcf826641549f5b29b4ba8","pid":11234,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4fc1258a098ed2efa6fb8a75abb3c0bb075dc11c5ffcf826641549f5b29b4ba8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4fc1258a098ed2efa6fb8a75abb3c0bb075dc11c5ffcf826641549f5b29b4ba8/rootfs","created":"2025-11-19T22:22:51.473965449Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/p
ause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"4fc1258a098ed2efa6fb8a75abb3c0bb075dc11c5ffcf826641549f5b29b4ba8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-mmpvz_328cb4c9-8a50-4e7a-bc3f-84fa90bfb493","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-mmpvz","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"328cb4c9-8a50-4e7a-bc3f-84fa90bfb493"},"owner":"root"},{"ociVersion":"1.2.1","id":"7ffa0d1a35c6c1b385439317d3fc27c223ac493687d1f922947c60285eefbcbc","pid":12252,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ffa0d1a35c6c1b385439317d3fc27c223ac493687d1f922947c60285eefbcbc","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ffa0d1a35c6c1b385439317d3fc27c223ac493687d1f922947c60285eefbcbc/rootfs","created":"2025-11-19T22:23:
05.747249242Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"7ffa0d1a35c6c1b385439317d3fc27c223ac493687d1f922947c60285eefbcbc","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-133839_2daa2a824df8c95f73f0a9a59dbe7a36","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2daa2a824df8c95f73f0a9a59dbe7a36"},"owner":"root"},{"ociVersion":"1.2.1","id":"87b8dd6ca9b813cf37b2ded22cbc31a79567feca4c36e13d251d9b218de85710","pid":10506,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/87b8dd6ca9b813cf37b2ded22cbc31a79567feca4c36e13d251d9b218de8
5710","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/87b8dd6ca9b813cf37b2ded22cbc31a79567feca4c36e13d251d9b218de85710/rootfs","created":"2025-11-19T22:22:40.692429634Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"bd23568adf1f13eb3c8fad2d246ae2e0c7c065a6bc4eacf802300dbbaef34dce","io.kubernetes.cri.sandbox-name":"kube-scheduler-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2daa2a824df8c95f73f0a9a59dbe7a36"},"owner":"root"},{"ociVersion":"1.2.1","id":"8e061dd59ede876b86e1a7d997e4443482d81b306fb4e751f2e4e0652b012ec6","pid":11966,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e061dd59ede876b86e1a7d997e4443482d81b306fb4e751f2e4e0652b012ec6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e061dd59ede876b86e1a7d997
e4443482d81b306fb4e751f2e4e0652b012ec6/rootfs","created":"2025-11-19T22:23:04.904760302Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"8e061dd59ede876b86e1a7d997e4443482d81b306fb4e751f2e4e0652b012ec6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-133839_0ba3d982eebe7faf07b0096bf84838b5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0ba3d982eebe7faf07b0096bf84838b5"},"owner":"root"},{"ociVersion":"1.2.1","id":"98d570aaaab466be92c2b69a5dfd5847f0e4ace601469147491780d7fc1a0111","pid":11310,"status":"running","bundle":"/run/containerd/io.containerd.runtime.
v2.task/k8s.io/98d570aaaab466be92c2b69a5dfd5847f0e4ace601469147491780d7fc1a0111","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/98d570aaaab466be92c2b69a5dfd5847f0e4ace601469147491780d7fc1a0111/rootfs","created":"2025-11-19T22:22:53.997448247Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20250512-df8de77b","io.kubernetes.cri.sandbox-id":"4fc1258a098ed2efa6fb8a75abb3c0bb075dc11c5ffcf826641549f5b29b4ba8","io.kubernetes.cri.sandbox-name":"kindnet-mmpvz","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"328cb4c9-8a50-4e7a-bc3f-84fa90bfb493"},"owner":"root"},{"ociVersion":"1.2.1","id":"a6123471734e200fddf31b912d99e6bc7e2f2c74ed407e9241243453c028db6d","pid":12261,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a6123471734e200fddf31b912d99e6bc7e2f2c74ed407e9241243453c028db6d","rootfs":"/run/containerd/io.cont
ainerd.runtime.v2.task/k8s.io/a6123471734e200fddf31b912d99e6bc7e2f2c74ed407e9241243453c028db6d/rootfs","created":"2025-11-19T22:23:05.77526698Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"a6123471734e200fddf31b912d99e6bc7e2f2c74ed407e9241243453c028db6d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-tv8vg_4fa9d59c-bb51-48df-90a7-5d8964136650","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-tv8vg","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4fa9d59c-bb51-48df-90a7-5d8964136650"},"owner":"root"},{"ociVersion":"1.2.1","id":"bd23568adf1f13eb3c8fad2d246ae2e0c7c065a6bc4eacf802300dbbaef34dce","pid":10385,"status":"running","bundle":"/run/containerd/io.contai
nerd.runtime.v2.task/k8s.io/bd23568adf1f13eb3c8fad2d246ae2e0c7c065a6bc4eacf802300dbbaef34dce","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd23568adf1f13eb3c8fad2d246ae2e0c7c065a6bc4eacf802300dbbaef34dce/rootfs","created":"2025-11-19T22:22:40.585328115Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"bd23568adf1f13eb3c8fad2d246ae2e0c7c065a6bc4eacf802300dbbaef34dce","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-133839_2daa2a824df8c95f73f0a9a59dbe7a36","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2daa2a824df8c95f73f0a9a59dbe7a36"},"owner
":"root"},{"ociVersion":"1.2.1","id":"cca94b63b3b80ac27cb4a496c0a84f300a77511b40c211f086d7f79144b2f848","pid":10393,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cca94b63b3b80ac27cb4a496c0a84f300a77511b40c211f086d7f79144b2f848","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cca94b63b3b80ac27cb4a496c0a84f300a77511b40c211f086d7f79144b2f848/rootfs","created":"2025-11-19T22:22:40.582268242Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"cca94b63b3b80ac27cb4a496c0a84f300a77511b40c211f086d7f79144b2f848","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-133839_7dffba8d778115ced44b1ff92d7a1c7d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-kub
ernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7dffba8d778115ced44b1ff92d7a1c7d"},"owner":"root"},{"ociVersion":"1.2.1","id":"dedead971ea2a958b7efb611b115ad2b440f8121ad8beb2d700bbd7d5e7411ee","pid":10469,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dedead971ea2a958b7efb611b115ad2b440f8121ad8beb2d700bbd7d5e7411ee","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dedead971ea2a958b7efb611b115ad2b440f8121ad8beb2d700bbd7d5e7411ee/rootfs","created":"2025-11-19T22:22:40.681427399Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"12d9ec11a4972f320e48cf8fca8450b4c2a5ff30856118beab31d8c327b60b3c","io.kubernetes.cri.sandbox-name":"kube-controller-manager-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-
system","io.kubernetes.cri.sandbox-uid":"a73bb1d93b00961144fed68962189df9"},"owner":"root"},{"ociVersion":"1.2.1","id":"e2559a1a3c6a93297b3cd70a1d33d499072cb3f344f5b04cf590c553b3a97867","pid":11161,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2559a1a3c6a93297b3cd70a1d33d499072cb3f344f5b04cf590c553b3a97867","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2559a1a3c6a93297b3cd70a1d33d499072cb3f344f5b04cf590c553b3a97867/rootfs","created":"2025-11-19T22:22:50.786656045Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"e2559a1a3c6a93297b3cd70a1d33d499072cb3f344f5b04cf590c553b3a97867","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-133839_a73bb1d93b0096
1144fed68962189df9","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a73bb1d93b00961144fed68962189df9"},"owner":"root"},{"ociVersion":"1.2.1","id":"e6316f8dba2f64d811a9d3ec5d93950941d5e9cd21f83f287f740b63be231449","pid":12356,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6316f8dba2f64d811a9d3ec5d93950941d5e9cd21f83f287f740b63be231449","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6316f8dba2f64d811a9d3ec5d93950941d5e9cd21f83f287f740b63be231449/rootfs","created":"2025-11-19T22:23:06.071063858Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.34.1","io.kubernetes.cri.sandbox-id":"a6123471734e200fddf31b912d99e6bc7e2f2c74ed407e9241243453c028db6d","io.kubernetes.cri.sandbox-name":"k
ube-proxy-tv8vg","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4fa9d59c-bb51-48df-90a7-5d8964136650"},"owner":"root"},{"ociVersion":"1.2.1","id":"f5bfa324a5846e1b2244976fcf3019acefb82ba86ec957065470e7ffb9b1a115","pid":10500,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5bfa324a5846e1b2244976fcf3019acefb82ba86ec957065470e7ffb9b1a115","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5bfa324a5846e1b2244976fcf3019acefb82ba86ec957065470e7ffb9b1a115/rootfs","created":"2025-11-19T22:22:40.694516568Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"cca94b63b3b80ac27cb4a496c0a84f300a77511b40c211f086d7f79144b2f848","io.kubernetes.cri.sandbox-name":"etcd-kubernetes-upgrade-133839","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7dffba8d778115ced44
b1ff92d7a1c7d"},"owner":"root"}]
	I1119 22:23:06.622070  286310 cri.go:126] list returned 16 containers
	I1119 22:23:06.622090  286310 cri.go:129] container: {ID:12d9ec11a4972f320e48cf8fca8450b4c2a5ff30856118beab31d8c327b60b3c Status:running}
	I1119 22:23:06.622123  286310 cri.go:131] skipping 12d9ec11a4972f320e48cf8fca8450b4c2a5ff30856118beab31d8c327b60b3c - not in ps
	I1119 22:23:06.622152  286310 cri.go:129] container: {ID:1ed7f1f8ffb70b1f7e64c7e1ea227b5db5955a6c016d0854795f217cb90a0535 Status:running}
	I1119 22:23:06.622160  286310 cri.go:131] skipping 1ed7f1f8ffb70b1f7e64c7e1ea227b5db5955a6c016d0854795f217cb90a0535 - not in ps
	I1119 22:23:06.622168  286310 cri.go:129] container: {ID:44a069aaaa220922fff9974f11af1aeabeb7c8e060df6704a442191e39e0b20a Status:running}
	I1119 22:23:06.622180  286310 cri.go:131] skipping 44a069aaaa220922fff9974f11af1aeabeb7c8e060df6704a442191e39e0b20a - not in ps
	I1119 22:23:06.622189  286310 cri.go:129] container: {ID:48b8bb5abe3a6c0776b6c080a8596df219117928b5b203bf5156ace4ce6b61bf Status:running}
	I1119 22:23:06.622194  286310 cri.go:131] skipping 48b8bb5abe3a6c0776b6c080a8596df219117928b5b203bf5156ace4ce6b61bf - not in ps
	I1119 22:23:06.622203  286310 cri.go:129] container: {ID:4fc1258a098ed2efa6fb8a75abb3c0bb075dc11c5ffcf826641549f5b29b4ba8 Status:running}
	I1119 22:23:06.622208  286310 cri.go:131] skipping 4fc1258a098ed2efa6fb8a75abb3c0bb075dc11c5ffcf826641549f5b29b4ba8 - not in ps
	I1119 22:23:06.622216  286310 cri.go:129] container: {ID:7ffa0d1a35c6c1b385439317d3fc27c223ac493687d1f922947c60285eefbcbc Status:running}
	I1119 22:23:06.622222  286310 cri.go:131] skipping 7ffa0d1a35c6c1b385439317d3fc27c223ac493687d1f922947c60285eefbcbc - not in ps
	I1119 22:23:06.622230  286310 cri.go:129] container: {ID:87b8dd6ca9b813cf37b2ded22cbc31a79567feca4c36e13d251d9b218de85710 Status:running}
	I1119 22:23:06.622235  286310 cri.go:131] skipping 87b8dd6ca9b813cf37b2ded22cbc31a79567feca4c36e13d251d9b218de85710 - not in ps
	I1119 22:23:06.622242  286310 cri.go:129] container: {ID:8e061dd59ede876b86e1a7d997e4443482d81b306fb4e751f2e4e0652b012ec6 Status:running}
	I1119 22:23:06.622247  286310 cri.go:131] skipping 8e061dd59ede876b86e1a7d997e4443482d81b306fb4e751f2e4e0652b012ec6 - not in ps
	I1119 22:23:06.622251  286310 cri.go:129] container: {ID:98d570aaaab466be92c2b69a5dfd5847f0e4ace601469147491780d7fc1a0111 Status:running}
	I1119 22:23:06.622261  286310 cri.go:135] skipping {98d570aaaab466be92c2b69a5dfd5847f0e4ace601469147491780d7fc1a0111 running}: state = "running", want "paused"
	I1119 22:23:06.622270  286310 cri.go:129] container: {ID:a6123471734e200fddf31b912d99e6bc7e2f2c74ed407e9241243453c028db6d Status:running}
	I1119 22:23:06.622281  286310 cri.go:131] skipping a6123471734e200fddf31b912d99e6bc7e2f2c74ed407e9241243453c028db6d - not in ps
	I1119 22:23:06.622285  286310 cri.go:129] container: {ID:bd23568adf1f13eb3c8fad2d246ae2e0c7c065a6bc4eacf802300dbbaef34dce Status:running}
	I1119 22:23:06.622291  286310 cri.go:131] skipping bd23568adf1f13eb3c8fad2d246ae2e0c7c065a6bc4eacf802300dbbaef34dce - not in ps
	I1119 22:23:06.622298  286310 cri.go:129] container: {ID:cca94b63b3b80ac27cb4a496c0a84f300a77511b40c211f086d7f79144b2f848 Status:running}
	I1119 22:23:06.622306  286310 cri.go:131] skipping cca94b63b3b80ac27cb4a496c0a84f300a77511b40c211f086d7f79144b2f848 - not in ps
	I1119 22:23:06.622315  286310 cri.go:129] container: {ID:dedead971ea2a958b7efb611b115ad2b440f8121ad8beb2d700bbd7d5e7411ee Status:running}
	I1119 22:23:06.622319  286310 cri.go:131] skipping dedead971ea2a958b7efb611b115ad2b440f8121ad8beb2d700bbd7d5e7411ee - not in ps
	I1119 22:23:06.622327  286310 cri.go:129] container: {ID:e2559a1a3c6a93297b3cd70a1d33d499072cb3f344f5b04cf590c553b3a97867 Status:running}
	I1119 22:23:06.622336  286310 cri.go:131] skipping e2559a1a3c6a93297b3cd70a1d33d499072cb3f344f5b04cf590c553b3a97867 - not in ps
	I1119 22:23:06.622344  286310 cri.go:129] container: {ID:e6316f8dba2f64d811a9d3ec5d93950941d5e9cd21f83f287f740b63be231449 Status:running}
	I1119 22:23:06.622355  286310 cri.go:135] skipping {e6316f8dba2f64d811a9d3ec5d93950941d5e9cd21f83f287f740b63be231449 running}: state = "running", want "paused"
	I1119 22:23:06.622364  286310 cri.go:129] container: {ID:f5bfa324a5846e1b2244976fcf3019acefb82ba86ec957065470e7ffb9b1a115 Status:running}
	I1119 22:23:06.622372  286310 cri.go:131] skipping f5bfa324a5846e1b2244976fcf3019acefb82ba86ec957065470e7ffb9b1a115 - not in ps
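The cri.go lines above show the two skip rules: a runc task is ignored if its ID is not in the crictl ps output ("not in ps") or if its state does not match the wanted one (here "paused", since everything is still "running"). A simplified sketch of that filtering, shelling out to the same commands the log runs (this is not minikube's actual cri.go, just an illustration):

// cri_filter.go - illustrative only.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	wantState := "paused"

	// Container IDs crictl knows about in the kube-system namespace.
	psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl:", err)
		return
	}
	inPs := map[string]bool{}
	for _, id := range strings.Fields(string(psOut)) {
		inPs[id] = true
	}

	// Full runc task list for the k8s.io namespace, as JSON.
	listOut, err := exec.Command("sudo", "runc",
		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	if err != nil {
		fmt.Println("runc list:", err)
		return
	}
	var containers []runcContainer
	if err := json.Unmarshal(listOut, &containers); err != nil {
		fmt.Println("decode:", err)
		return
	}

	for _, c := range containers {
		switch {
		case !inPs[c.ID]:
			fmt.Printf("skipping %s - not in ps\n", c.ID)
		case c.Status != wantState:
			fmt.Printf("skipping %s: state = %q, want %q\n", c.ID, c.Status, wantState)
		default:
			fmt.Println("selected", c.ID)
		}
	}
}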
	I1119 22:23:06.622418  286310 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:23:06.632054  286310 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 22:23:06.632073  286310 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 22:23:06.632140  286310 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 22:23:06.641759  286310 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:23:06.642570  286310 kubeconfig.go:125] found "kubernetes-upgrade-133839" server: "https://192.168.76.2:8443"
	I1119 22:23:06.643563  286310 kapi.go:59] client config for kubernetes-upgrade-133839: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/client.key", CAFile:"/home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 22:23:06.644022  286310 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1119 22:23:06.644043  286310 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1119 22:23:06.644050  286310 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1119 22:23:06.644060  286310 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1119 22:23:06.644065  286310 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1119 22:23:06.644383  286310 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 22:23:06.653823  286310 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1119 22:23:06.653865  286310 kubeadm.go:602] duration metric: took 21.785301ms to restartPrimaryControlPlane
	I1119 22:23:06.653875  286310 kubeadm.go:403] duration metric: took 96.575435ms to StartCluster
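The "does not require reconfiguration" decision above follows a diff of the kubeadm config already on the node against the freshly rendered one (the "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new" run a few lines earlier). A rough sketch of that decision, comparing the two files directly rather than shelling out to diff as minikube does over SSH:

// config_diff.go - illustrative only; paths from the log.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	current, errCur := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	next, errNext := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if errCur != nil || errNext != nil {
		// A missing file would force the full reconfigure path.
		fmt.Println("config missing, reconfiguration required:", errCur, errNext)
		return
	}
	if bytes.Equal(current, next) {
		fmt.Println("running cluster does not require reconfiguration")
	} else {
		fmt.Println("kubeadm config changed, reconfiguration required")
	}
}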
	I1119 22:23:06.653916  286310 settings.go:142] acquiring lock: {Name:mk3c795849984e82ee99295088dd85252bd75f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:23:06.653994  286310 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:23:06.655154  286310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9296/kubeconfig: {Name:mk5b9093863cb8ca8629eea9fd861356875781d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:23:06.655432  286310 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:23:06.655568  286310 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:23:06.655646  286310 config.go:182] Loaded profile config "kubernetes-upgrade-133839": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:23:06.655681  286310 addons.go:70] Setting storage-provisioner=true in profile "kubernetes-upgrade-133839"
	I1119 22:23:06.655700  286310 addons.go:70] Setting default-storageclass=true in profile "kubernetes-upgrade-133839"
	I1119 22:23:06.655719  286310 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-133839"
	I1119 22:23:06.655702  286310 addons.go:239] Setting addon storage-provisioner=true in "kubernetes-upgrade-133839"
	W1119 22:23:06.655807  286310 addons.go:248] addon storage-provisioner should already be in state true
	I1119 22:23:06.655836  286310 host.go:66] Checking if "kubernetes-upgrade-133839" exists ...
	I1119 22:23:06.656106  286310 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-133839 --format={{.State.Status}}
	I1119 22:23:06.656355  286310 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-133839 --format={{.State.Status}}
	I1119 22:23:06.658386  286310 out.go:179] * Verifying Kubernetes components...
	I1119 22:23:06.659906  286310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:23:06.683464  286310 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:23:06.685568  286310 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:23:06.685593  286310 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:23:06.685650  286310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-133839
	I1119 22:23:06.686096  286310 kapi.go:59] client config for kubernetes-upgrade-133839: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/client.key", CAFile:"/home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 22:23:06.686461  286310 addons.go:239] Setting addon default-storageclass=true in "kubernetes-upgrade-133839"
	W1119 22:23:06.686485  286310 addons.go:248] addon default-storageclass should already be in state true
	I1119 22:23:06.686515  286310 host.go:66] Checking if "kubernetes-upgrade-133839" exists ...
	I1119 22:23:06.687010  286310 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-133839 --format={{.State.Status}}
	I1119 22:23:06.715972  286310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/kubernetes-upgrade-133839/id_rsa Username:docker}
	I1119 22:23:06.718472  286310 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:23:06.718493  286310 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:23:06.718552  286310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-133839
	I1119 22:23:06.745363  286310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/kubernetes-upgrade-133839/id_rsa Username:docker}
	I1119 22:23:06.839021  286310 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:23:06.856232  286310 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:23:06.872283  286310 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:23:07.655934  286310 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:23:07.656013  286310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:23:07.681629  286310 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
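"Waiting for apiserver process to appear" above is a poll of the same pgrep pattern the log runs over SSH. A sketch of that wait loop, with an assumed two-minute cap (the actual timeout is whatever minikube configures):

// apiserver_wait.go - illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// pgrep exits non-zero when nothing matches, which surfaces as err here.
		out, err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			fmt.Printf("apiserver process found: pid %s", out)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver process")
}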
	I1119 22:23:05.774287  289819 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1119 22:23:05.774309  289819 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1119 22:23:05.774385  289819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:23:05.796967  289819 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:23:05.797056  289819 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:23:05.797160  289819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-982287
	I1119 22:23:05.803368  289819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/newest-cni-982287/id_rsa Username:docker}
	I1119 22:23:05.805752  289819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/newest-cni-982287/id_rsa Username:docker}
	I1119 22:23:05.823043  289819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/newest-cni-982287/id_rsa Username:docker}
	I1119 22:23:05.831702  289819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/newest-cni-982287/id_rsa Username:docker}
	I1119 22:23:05.986571  289819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:23:06.013987  289819 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:23:06.014105  289819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:23:06.028640  289819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:23:06.038093  289819 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1119 22:23:06.038121  289819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1119 22:23:06.044976  289819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:23:06.049599  289819 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 22:23:06.049621  289819 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 22:23:06.083352  289819 api_server.go:72] duration metric: took 365.128777ms to wait for apiserver process to appear ...
	I1119 22:23:06.083377  289819 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:23:06.083396  289819 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:23:06.090908  289819 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 22:23:06.090935  289819 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 22:23:06.115453  289819 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1119 22:23:06.115482  289819 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1119 22:23:06.166404  289819 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 22:23:06.166432  289819 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 22:23:06.167613  289819 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 22:23:06.167632  289819 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1119 22:23:06.206350  289819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 22:23:06.219774  289819 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 22:23:06.219806  289819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 22:23:06.258038  289819 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 22:23:06.258068  289819 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 22:23:06.279395  289819 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 22:23:06.279424  289819 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 22:23:06.298814  289819 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 22:23:06.298842  289819 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 22:23:06.317369  289819 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 22:23:06.317394  289819 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 22:23:06.334765  289819 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 22:23:06.334789  289819 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 22:23:06.351994  289819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 22:23:07.684278  286310 addons.go:515] duration metric: took 1.028714314s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 22:23:07.687023  286310 api_server.go:72] duration metric: took 1.03155518s to wait for apiserver process to appear ...
	I1119 22:23:07.687050  286310 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:23:07.687093  286310 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:23:07.695838  286310 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 22:23:07.697137  286310 api_server.go:141] control plane version: v1.34.1
	I1119 22:23:07.697168  286310 api_server.go:131] duration metric: took 10.109955ms to wait for apiserver health ...
	I1119 22:23:07.697178  286310 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:23:07.701315  286310 system_pods.go:59] 9 kube-system pods found
	I1119 22:23:07.701347  286310 system_pods.go:61] "coredns-66bc5c9577-fqplw" [db318af7-eae3-4384-96f2-081699e9db3a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 22:23:07.701357  286310 system_pods.go:61] "coredns-66bc5c9577-hrldj" [b377a23c-97ac-47ec-9c4d-457cb83ffb2e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 22:23:07.701368  286310 system_pods.go:61] "etcd-kubernetes-upgrade-133839" [4b7e70b8-bddd-4dd9-8bdf-8dd86b3aa490] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 22:23:07.701375  286310 system_pods.go:61] "kindnet-mmpvz" [328cb4c9-8a50-4e7a-bc3f-84fa90bfb493] Running
	I1119 22:23:07.701383  286310 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-133839" [ba70c25d-5c49-416e-a497-183ff447e341] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 22:23:07.701393  286310 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-133839" [92521eba-8aef-4ff7-8792-c8afdbb49ea5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 22:23:07.701402  286310 system_pods.go:61] "kube-proxy-tv8vg" [4fa9d59c-bb51-48df-90a7-5d8964136650] Running
	I1119 22:23:07.701415  286310 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-133839" [697d39c2-e8e1-4e8b-b712-ad9e4562393a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 22:23:07.701424  286310 system_pods.go:61] "storage-provisioner" [cc158467-51a4-4721-8417-296bd706c51b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 22:23:07.701433  286310 system_pods.go:74] duration metric: took 4.247535ms to wait for pod list to return data ...
	I1119 22:23:07.701449  286310 kubeadm.go:587] duration metric: took 1.045984565s to wait for: map[apiserver:true system_pods:true]
	I1119 22:23:07.701462  286310 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:23:07.704651  286310 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 22:23:07.704677  286310 node_conditions.go:123] node cpu capacity is 8
	I1119 22:23:07.704691  286310 node_conditions.go:105] duration metric: took 3.166288ms to run NodePressure ...
	I1119 22:23:07.704705  286310 start.go:242] waiting for startup goroutines ...
	I1119 22:23:07.704716  286310 start.go:247] waiting for cluster config update ...
	I1119 22:23:07.704737  286310 start.go:256] writing updated cluster config ...
	I1119 22:23:07.705027  286310 ssh_runner.go:195] Run: rm -f paused
	I1119 22:23:07.772927  286310 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 22:23:07.774527  286310 out.go:179] * Done! kubectl is now configured to use "kubernetes-upgrade-133839" cluster and "default" namespace by default
	I1119 22:23:03.454024  291097 out.go:252] * Restarting existing docker container for "embed-certs-299509" ...
	I1119 22:23:03.454108  291097 cli_runner.go:164] Run: docker start embed-certs-299509
	I1119 22:23:03.870757  291097 cli_runner.go:164] Run: docker container inspect embed-certs-299509 --format={{.State.Status}}
	I1119 22:23:03.896614  291097 kic.go:430] container "embed-certs-299509" state is running.
	I1119 22:23:03.930118  291097 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-299509
	I1119 22:23:03.956788  291097 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/embed-certs-299509/config.json ...
	I1119 22:23:04.023973  291097 machine.go:94] provisionDockerMachine start ...
	I1119 22:23:04.024090  291097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-299509
	I1119 22:23:04.051076  291097 main.go:143] libmachine: Using SSH client type: native
	I1119 22:23:04.051515  291097 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1119 22:23:04.051538  291097 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:23:04.052382  291097 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37352->127.0.0.1:33098: read: connection reset by peer
	I1119 22:23:07.222277  291097 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-299509
	
	I1119 22:23:07.222316  291097 ubuntu.go:182] provisioning hostname "embed-certs-299509"
	I1119 22:23:07.222391  291097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-299509
	I1119 22:23:07.249921  291097 main.go:143] libmachine: Using SSH client type: native
	I1119 22:23:07.250198  291097 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1119 22:23:07.250218  291097 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-299509 && echo "embed-certs-299509" | sudo tee /etc/hostname
	I1119 22:23:07.415261  291097 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-299509
	
	I1119 22:23:07.415347  291097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-299509
	I1119 22:23:07.441335  291097 main.go:143] libmachine: Using SSH client type: native
	I1119 22:23:07.441659  291097 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1119 22:23:07.441681  291097 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-299509' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-299509/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-299509' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:23:07.599245  291097 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:23:07.599285  291097 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9296/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9296/.minikube}
	I1119 22:23:07.599309  291097 ubuntu.go:190] setting up certificates
	I1119 22:23:07.599329  291097 provision.go:84] configureAuth start
	I1119 22:23:07.599385  291097 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-299509
	I1119 22:23:07.628903  291097 provision.go:143] copyHostCerts
	I1119 22:23:07.629064  291097 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem, removing ...
	I1119 22:23:07.629107  291097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem
	I1119 22:23:07.629240  291097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/ca.pem (1078 bytes)
	I1119 22:23:07.629417  291097 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem, removing ...
	I1119 22:23:07.629467  291097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem
	I1119 22:23:07.629555  291097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/cert.pem (1123 bytes)
	I1119 22:23:07.629703  291097 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem, removing ...
	I1119 22:23:07.629728  291097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem
	I1119 22:23:07.629775  291097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9296/.minikube/key.pem (1679 bytes)
	I1119 22:23:07.629854  291097 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca-key.pem org=jenkins.embed-certs-299509 san=[127.0.0.1 192.168.94.2 embed-certs-299509 localhost minikube]
	I1119 22:23:07.919644  291097 provision.go:177] copyRemoteCerts
	I1119 22:23:07.919715  291097 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:23:07.919779  291097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-299509
	I1119 22:23:07.946598  291097 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/embed-certs-299509/id_rsa Username:docker}
	I1119 22:23:08.057470  291097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 22:23:08.081475  291097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:23:08.101926  291097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:23:08.129494  291097 provision.go:87] duration metric: took 530.153658ms to configureAuth
	I1119 22:23:08.129524  291097 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:23:08.129740  291097 config.go:182] Loaded profile config "embed-certs-299509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:23:08.129762  291097 machine.go:97] duration metric: took 4.105763554s to provisionDockerMachine
	I1119 22:23:08.129773  291097 start.go:293] postStartSetup for "embed-certs-299509" (driver="docker")
	I1119 22:23:08.129787  291097 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:23:08.129837  291097 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:23:08.129923  291097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-299509
	I1119 22:23:08.231703  289819 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1119 22:23:08.231733  289819 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1119 22:23:08.231752  289819 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:23:08.300196  289819 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1119 22:23:08.300234  289819 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1119 22:23:08.402041  289819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.373362565s)
	I1119 22:23:08.584105  289819 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:23:08.591558  289819 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 22:23:08.591589  289819 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:23:09.083581  289819 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:23:09.089726  289819 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 22:23:09.089755  289819 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:23:09.122453  289819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.077440419s)
	I1119 22:23:09.174534  289819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.968130508s)
	I1119 22:23:09.174583  289819 addons.go:480] Verifying addon metrics-server=true in "newest-cni-982287"
	I1119 22:23:09.174622  289819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.822580941s)
	I1119 22:23:09.176722  289819 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-982287 addons enable metrics-server
	
	I1119 22:23:09.178200  289819 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1119 22:23:08.164823  291097 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/embed-certs-299509/id_rsa Username:docker}
	I1119 22:23:08.281215  291097 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:23:08.290274  291097 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:23:08.290310  291097 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:23:08.290324  291097 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9296/.minikube/addons for local assets ...
	I1119 22:23:08.290378  291097 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9296/.minikube/files for local assets ...
	I1119 22:23:08.290497  291097 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem -> 128212.pem in /etc/ssl/certs
	I1119 22:23:08.290638  291097 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:23:08.305289  291097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/ssl/certs/128212.pem --> /etc/ssl/certs/128212.pem (1708 bytes)
	I1119 22:23:08.342772  291097 start.go:296] duration metric: took 212.982067ms for postStartSetup
	I1119 22:23:08.342859  291097 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:23:08.342928  291097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-299509
	I1119 22:23:08.373041  291097 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/embed-certs-299509/id_rsa Username:docker}
	I1119 22:23:08.484427  291097 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:23:08.491833  291097 fix.go:56] duration metric: took 5.065485551s for fixHost
	I1119 22:23:08.491858  291097 start.go:83] releasing machines lock for "embed-certs-299509", held for 5.065537163s
	I1119 22:23:08.491946  291097 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-299509
	I1119 22:23:08.521684  291097 ssh_runner.go:195] Run: cat /version.json
	I1119 22:23:08.521740  291097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-299509
	I1119 22:23:08.521777  291097 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:23:08.521853  291097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-299509
	I1119 22:23:08.549878  291097 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/embed-certs-299509/id_rsa Username:docker}
	I1119 22:23:08.559389  291097 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/embed-certs-299509/id_rsa Username:docker}
	I1119 22:23:08.750544  291097 ssh_runner.go:195] Run: systemctl --version
	I1119 22:23:08.758593  291097 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:23:08.765031  291097 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:23:08.765107  291097 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:23:08.777022  291097 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 22:23:08.777047  291097 start.go:496] detecting cgroup driver to use...
	I1119 22:23:08.777080  291097 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 22:23:08.777159  291097 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 22:23:08.803238  291097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 22:23:08.824388  291097 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:23:08.824469  291097 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:23:08.859522  291097 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:23:08.885730  291097 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:23:09.038988  291097 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:23:09.155587  291097 docker.go:234] disabling docker service ...
	I1119 22:23:09.155654  291097 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:23:09.174741  291097 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:23:09.192195  291097 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:23:09.341404  291097 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:23:09.456466  291097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:23:09.474914  291097 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:23:09.497492  291097 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 22:23:09.510560  291097 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 22:23:09.523414  291097 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1119 22:23:09.523512  291097 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1119 22:23:09.536638  291097 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:23:09.550078  291097 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 22:23:09.560032  291097 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:23:09.572513  291097 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:23:09.583482  291097 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 22:23:09.594615  291097 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 22:23:09.610296  291097 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 22:23:09.622549  291097 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:23:09.632653  291097 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:23:09.642522  291097 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:23:09.739236  291097 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1119 22:23:09.898557  291097 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 22:23:09.898810  291097 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 22:23:09.904417  291097 start.go:564] Will wait 60s for crictl version
	I1119 22:23:09.904480  291097 ssh_runner.go:195] Run: which crictl
	I1119 22:23:09.910841  291097 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:23:09.945095  291097 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 22:23:09.945161  291097 ssh_runner.go:195] Run: containerd --version
	I1119 22:23:09.971306  291097 ssh_runner.go:195] Run: containerd --version
	I1119 22:23:10.006383  291097 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	4d83a706b7766       56cc512116c8f       10 seconds ago      Running             busybox                   0                   6f63218b464b9       busybox                                                default
	810caa6ef2edc       52546a367cc9e       16 seconds ago      Running             coredns                   0                   aca741cb294cb       coredns-66bc5c9577-f5cqw                               kube-system
	a1d8eb49da113       6e38f40d628db       16 seconds ago      Running             storage-provisioner       0                   ce6b6c5254d45       storage-provisioner                                    kube-system
	72a01e4ba7db0       409467f978b4a       28 seconds ago      Running             kindnet-cni               0                   1fbce5df09294       kindnet-ml6h4                                          kube-system
	7cb9dc7e8e5c6       fc25172553d79       28 seconds ago      Running             kube-proxy                0                   0c00ab6626f2b       kube-proxy-r2sgg                                       kube-system
	eb1659c62a6af       7dd6aaa1717ab       38 seconds ago      Running             kube-scheduler            0                   d7204ee6217df       kube-scheduler-default-k8s-diff-port-409240            kube-system
	2f7c6aef7e56e       c3994bc696102       38 seconds ago      Running             kube-apiserver            0                   d8e6cb3eb629e       kube-apiserver-default-k8s-diff-port-409240            kube-system
	5167af3d80ffd       c80c8dbafe7dd       38 seconds ago      Running             kube-controller-manager   0                   d79dead10e2ce       kube-controller-manager-default-k8s-diff-port-409240   kube-system
	d38b3d9548d61       5f1f5298c888d       38 seconds ago      Running             etcd                      0                   ef19fe6ed2ca3       etcd-default-k8s-diff-port-409240                      kube-system
	
	
	==> containerd <==
	Nov 19 22:22:54 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:54.322787744Z" level=info msg="StartContainer for \"a1d8eb49da11368af8baafedb6697768131e7a87fc151cc41099221841ba7546\""
	Nov 19 22:22:54 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:54.325307164Z" level=info msg="connecting to shim a1d8eb49da11368af8baafedb6697768131e7a87fc151cc41099221841ba7546" address="unix:///run/containerd/s/6dcae8d39a0b5dcfd30cd1013c5df08dd04f4758fcbb35fe45c2446c7b042307" protocol=ttrpc version=3
	Nov 19 22:22:54 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:54.327050450Z" level=info msg="CreateContainer within sandbox \"aca741cb294cbae9a1df7d3b32e570c8c906aecf9bd3edf4f7ba815f02c6ffec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 19 22:22:54 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:54.335556765Z" level=info msg="Container 810caa6ef2edc83a8b8a5856884b2ad886cb3c6cb49581d5756efe178ccdff85: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:22:54 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:54.343834979Z" level=info msg="CreateContainer within sandbox \"aca741cb294cbae9a1df7d3b32e570c8c906aecf9bd3edf4f7ba815f02c6ffec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"810caa6ef2edc83a8b8a5856884b2ad886cb3c6cb49581d5756efe178ccdff85\""
	Nov 19 22:22:54 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:54.344380569Z" level=info msg="StartContainer for \"810caa6ef2edc83a8b8a5856884b2ad886cb3c6cb49581d5756efe178ccdff85\""
	Nov 19 22:22:54 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:54.345396401Z" level=info msg="connecting to shim 810caa6ef2edc83a8b8a5856884b2ad886cb3c6cb49581d5756efe178ccdff85" address="unix:///run/containerd/s/2fc39d63b116a1ff20c366ca1d4d88883d14ac0100f4f47879a1bdae9ebd425a" protocol=ttrpc version=3
	Nov 19 22:22:54 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:54.395284737Z" level=info msg="StartContainer for \"a1d8eb49da11368af8baafedb6697768131e7a87fc151cc41099221841ba7546\" returns successfully"
	Nov 19 22:22:54 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:54.409784018Z" level=info msg="StartContainer for \"810caa6ef2edc83a8b8a5856884b2ad886cb3c6cb49581d5756efe178ccdff85\" returns successfully"
	Nov 19 22:22:58 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:58.102585147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:eee884c0-8976-48f4-8b93-86a4bc150754,Namespace:default,Attempt:0,}"
	Nov 19 22:22:58 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:58.155603467Z" level=info msg="connecting to shim 6f63218b464b9df3440957b974563f4966a8f62c4cfeb3f03c0403fd71fb70a2" address="unix:///run/containerd/s/af761d2e20de794dbb47f057a60c2e52887f37a0b3b075c22124a2598aabd4a5" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 22:22:58 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:58.240723137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:eee884c0-8976-48f4-8b93-86a4bc150754,Namespace:default,Attempt:0,} returns sandbox id \"6f63218b464b9df3440957b974563f4966a8f62c4cfeb3f03c0403fd71fb70a2\""
	Nov 19 22:22:58 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:22:58.243400864Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.390779692Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.391636689Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396648"
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.392809709Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.396974406Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.397508336Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.15405943s"
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.397552872Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.402481573Z" level=info msg="CreateContainer within sandbox \"6f63218b464b9df3440957b974563f4966a8f62c4cfeb3f03c0403fd71fb70a2\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.410068998Z" level=info msg="Container 4d83a706b77663920f27b17bc399c18f9b4a80e7f0036883ff9002c4755b617e: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.417619619Z" level=info msg="CreateContainer within sandbox \"6f63218b464b9df3440957b974563f4966a8f62c4cfeb3f03c0403fd71fb70a2\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"4d83a706b77663920f27b17bc399c18f9b4a80e7f0036883ff9002c4755b617e\""
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.418313393Z" level=info msg="StartContainer for \"4d83a706b77663920f27b17bc399c18f9b4a80e7f0036883ff9002c4755b617e\""
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.419258335Z" level=info msg="connecting to shim 4d83a706b77663920f27b17bc399c18f9b4a80e7f0036883ff9002c4755b617e" address="unix:///run/containerd/s/af761d2e20de794dbb47f057a60c2e52887f37a0b3b075c22124a2598aabd4a5" protocol=ttrpc version=3
	Nov 19 22:23:00 default-k8s-diff-port-409240 containerd[663]: time="2025-11-19T22:23:00.481717143Z" level=info msg="StartContainer for \"4d83a706b77663920f27b17bc399c18f9b4a80e7f0036883ff9002c4755b617e\" returns successfully"
	
	
	==> coredns [810caa6ef2edc83a8b8a5856884b2ad886cb3c6cb49581d5756efe178ccdff85] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46246 - 40558 "HINFO IN 3435086917380568170.3234967037515506881. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.066971238s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-409240
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-409240
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=default-k8s-diff-port-409240
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_22_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:22:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-409240
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:23:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:23:06 +0000   Wed, 19 Nov 2025 22:22:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:23:06 +0000   Wed, 19 Nov 2025 22:22:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:23:06 +0000   Wed, 19 Nov 2025 22:22:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:23:06 +0000   Wed, 19 Nov 2025 22:22:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-409240
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                25b2f9d9-4024-4506-99ca-57d79a4aba10
	  Boot ID:                    f21fb8e8-9754-4dc5-a8d9-ce41ba5f6057
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-f5cqw                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-default-k8s-diff-port-409240                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-ml6h4                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-default-k8s-diff-port-409240             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-409240    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-r2sgg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-default-k8s-diff-port-409240             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  Starting                 39s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s (x8 over 39s)  kubelet          Node default-k8s-diff-port-409240 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s (x8 over 39s)  kubelet          Node default-k8s-diff-port-409240 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s (x7 over 39s)  kubelet          Node default-k8s-diff-port-409240 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  39s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  34s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  34s                kubelet          Node default-k8s-diff-port-409240 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s                kubelet          Node default-k8s-diff-port-409240 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s                kubelet          Node default-k8s-diff-port-409240 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node default-k8s-diff-port-409240 event: Registered Node default-k8s-diff-port-409240 in Controller
	  Normal  NodeReady                17s                kubelet          Node default-k8s-diff-port-409240 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov19 21:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001836] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.089012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.424964] i8042: Warning: Keylock active
	[  +0.011946] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499038] block sda: the capability attribute has been deprecated.
	[  +0.090446] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026259] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.862736] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [d38b3d9548d6178a52dc3b1ff81520bb354f6add1ea3feaae5043525a24acf02] <==
	{"level":"warn","ts":"2025-11-19T22:22:33.009471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.017434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.034076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.043421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.052425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.059739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.068032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.076401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.095998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.105855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.114236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:22:33.184838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49796","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T22:22:36.356589Z","caller":"traceutil/trace.go:172","msg":"trace[1531196741] transaction","detail":"{read_only:false; response_revision:264; number_of_response:1; }","duration":"104.72599ms","start":"2025-11-19T22:22:36.251842Z","end":"2025-11-19T22:22:36.356568Z","steps":["trace[1531196741] 'process raft request'  (duration: 104.203074ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:22:43.414526Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.669121ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-ml6h4\" limit:1 ","response":"range_response_count:1 size:5340"}
	{"level":"info","ts":"2025-11-19T22:22:43.414632Z","caller":"traceutil/trace.go:172","msg":"trace[672105413] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-ml6h4; range_end:; response_count:1; response_revision:418; }","duration":"119.788443ms","start":"2025-11-19T22:22:43.294827Z","end":"2025-11-19T22:22:43.414615Z","steps":["trace[672105413] 'range keys from in-memory index tree'  (duration: 119.537649ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:22:53.466823Z","caller":"traceutil/trace.go:172","msg":"trace[551475231] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"228.146488ms","start":"2025-11-19T22:22:53.238653Z","end":"2025-11-19T22:22:53.466800Z","steps":["trace[551475231] 'process raft request'  (duration: 227.952842ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:22:53.655671Z","caller":"traceutil/trace.go:172","msg":"trace[2096458719] linearizableReadLoop","detail":"{readStateIndex:445; appliedIndex:445; }","duration":"170.449429ms","start":"2025-11-19T22:22:53.485201Z","end":"2025-11-19T22:22:53.655650Z","steps":["trace[2096458719] 'read index received'  (duration: 170.436538ms)","trace[2096458719] 'applied index is now lower than readState.Index'  (duration: 11.568µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:22:53.697970Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"212.750546ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-409240\" limit:1 ","response":"range_response_count:1 size:4648"}
	{"level":"info","ts":"2025-11-19T22:22:53.698040Z","caller":"traceutil/trace.go:172","msg":"trace[787510816] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-409240; range_end:; response_count:1; response_revision:432; }","duration":"212.834189ms","start":"2025-11-19T22:22:53.485191Z","end":"2025-11-19T22:22:53.698025Z","steps":["trace[787510816] 'agreement among raft nodes before linearized reading'  (duration: 170.561877ms)","trace[787510816] 'range keys from in-memory index tree'  (duration: 42.030043ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T22:22:53.698077Z","caller":"traceutil/trace.go:172","msg":"trace[1227998274] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"453.483537ms","start":"2025-11-19T22:22:53.244572Z","end":"2025-11-19T22:22:53.698056Z","steps":["trace[1227998274] 'process raft request'  (duration: 411.115494ms)","trace[1227998274] 'compare'  (duration: 42.246072ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:22:53.698666Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-19T22:22:53.244547Z","time spent":"453.652917ms","remote":"127.0.0.1:49052","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4564,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/default-k8s-diff-port-409240\" mod_revision:356 > success:<request_put:<key:\"/registry/minions/default-k8s-diff-port-409240\" value_size:4510 >> failure:<request_range:<key:\"/registry/minions/default-k8s-diff-port-409240\" > >"}
	{"level":"warn","ts":"2025-11-19T22:22:55.910940Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.881407ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T22:22:55.911042Z","caller":"traceutil/trace.go:172","msg":"trace[419586803] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:456; }","duration":"120.002318ms","start":"2025-11-19T22:22:55.791025Z","end":"2025-11-19T22:22:55.911027Z","steps":["trace[419586803] 'range keys from in-memory index tree'  (duration: 119.8203ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:22:55.911160Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"256.5529ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T22:22:55.911191Z","caller":"traceutil/trace.go:172","msg":"trace[1202920222] range","detail":"{range_begin:/registry/minions; range_end:; response_count:0; response_revision:456; }","duration":"256.590275ms","start":"2025-11-19T22:22:55.654592Z","end":"2025-11-19T22:22:55.911182Z","steps":["trace[1202920222] 'range keys from in-memory index tree'  (duration: 256.493047ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:23:10 up  1:05,  0 user,  load average: 4.85, 3.81, 2.45
	Linux default-k8s-diff-port-409240 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [72a01e4ba7db0cbbf56094887293f9bd55e892f2efc3c5f638add5dd05a0771d] <==
	I1119 22:22:42.770285       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:22:42.770685       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1119 22:22:42.770828       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:22:42.770845       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:22:42.770858       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:22:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:22:43.059265       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:22:43.059336       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:22:43.059353       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:22:43.059579       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:22:43.459466       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:22:43.459492       1 metrics.go:72] Registering metrics
	I1119 22:22:43.459542       1 controller.go:711] "Syncing nftables rules"
	I1119 22:22:53.062961       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 22:22:53.063027       1 main.go:301] handling current node
	I1119 22:23:03.059952       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 22:23:03.059992       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2f7c6aef7e56e549c365f54517c771a4d1d1d70e8fbebb03436e2207659e9842] <==
	I1119 22:22:33.732244       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 22:22:33.737153       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:22:33.738196       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:22:33.740273       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 22:22:33.746142       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:22:33.746388       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:22:33.782669       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:22:34.733984       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:22:34.777955       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:22:34.777981       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:22:35.401694       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:22:35.444799       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:22:35.539293       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:22:35.545289       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1119 22:22:35.546543       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:22:35.550522       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:22:36.385678       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:22:36.396160       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:22:36.413380       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:22:36.423969       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:22:41.652154       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 22:22:42.252173       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:22:42.308563       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:22:42.318305       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1119 22:23:06.964768       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:39034: use of closed network connection
	
	
	==> kube-controller-manager [5167af3d80ffdb05096166d9330cad0299100bafcb9e0af013f17c31936a27c7] <==
	I1119 22:22:41.396000       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:22:41.396024       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:22:41.396032       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:22:41.396524       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 22:22:41.396540       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 22:22:41.396572       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 22:22:41.396705       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 22:22:41.396961       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 22:22:41.397196       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 22:22:41.397553       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 22:22:41.397230       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 22:22:41.397694       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 22:22:41.397680       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:22:41.397213       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 22:22:41.399426       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 22:22:41.402055       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:22:41.402755       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:22:41.406720       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 22:22:41.406838       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:22:41.407233       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-409240"
	I1119 22:22:41.407289       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 22:22:41.422505       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:22:41.439693       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:22:41.454097       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:22:56.409707       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7cb9dc7e8e5c6d569542be180135a6b54fa081a5e0d488813e3772ad7d8749b8] <==
	I1119 22:22:42.295142       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:22:42.357439       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:22:42.457715       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:22:42.457752       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1119 22:22:42.457834       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:22:42.485008       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:22:42.485078       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:22:42.491736       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:22:42.492195       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:22:42.492225       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:22:42.494035       1 config.go:200] "Starting service config controller"
	I1119 22:22:42.497010       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:22:42.494625       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:22:42.497067       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:22:42.494637       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:22:42.497081       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:22:42.494278       1 config.go:309] "Starting node config controller"
	I1119 22:22:42.497092       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:22:42.497293       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:22:42.598645       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:22:42.598688       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:22:42.598728       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [eb1659c62a6af8497a707325868baae27ff17f0f302531a2e636ac52a83637e0] <==
	E1119 22:22:33.705705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:22:33.705737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:22:33.705821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:22:33.705866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:22:33.705871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:22:33.706031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:22:33.706626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:22:33.708738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:22:34.508115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:22:34.510102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:22:34.555481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:22:34.565837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:22:34.676508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:22:34.733477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:22:34.811547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:22:34.924193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:22:34.946568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:22:34.986400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:22:35.025070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:22:35.031350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:22:35.044773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:22:35.070125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:22:35.103437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 22:22:35.140810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1119 22:22:36.902024       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:22:37 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:37.274922    1446 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-default-k8s-diff-port-409240"
	Nov 19 22:22:37 default-k8s-diff-port-409240 kubelet[1446]: E1119 22:22:37.285660    1446 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-409240\" already exists" pod="kube-system/etcd-default-k8s-diff-port-409240"
	Nov 19 22:22:37 default-k8s-diff-port-409240 kubelet[1446]: E1119 22:22:37.286760    1446 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-409240\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-409240"
	Nov 19 22:22:37 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:37.300807    1446 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-409240" podStartSLOduration=1.300783788 podStartE2EDuration="1.300783788s" podCreationTimestamp="2025-11-19 22:22:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:37.300653374 +0000 UTC m=+1.151940170" watchObservedRunningTime="2025-11-19 22:22:37.300783788 +0000 UTC m=+1.152070579"
	Nov 19 22:22:37 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:37.300928    1446 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-409240" podStartSLOduration=1.300922094 podStartE2EDuration="1.300922094s" podCreationTimestamp="2025-11-19 22:22:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:37.286011705 +0000 UTC m=+1.137298498" watchObservedRunningTime="2025-11-19 22:22:37.300922094 +0000 UTC m=+1.152208885"
	Nov 19 22:22:41 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:41.405721    1446 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 22:22:41 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:41.406652    1446 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 22:22:41 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:41.778058    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtmxq\" (UniqueName: \"kubernetes.io/projected/2b8c2bb4-299d-40e7-af2f-7313f2ba0437-kube-api-access-jtmxq\") pod \"kindnet-ml6h4\" (UID: \"2b8c2bb4-299d-40e7-af2f-7313f2ba0437\") " pod="kube-system/kindnet-ml6h4"
	Nov 19 22:22:41 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:41.778110    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b687585b-f9cc-4321-9055-9b5a448fd38f-kube-proxy\") pod \"kube-proxy-r2sgg\" (UID: \"b687585b-f9cc-4321-9055-9b5a448fd38f\") " pod="kube-system/kube-proxy-r2sgg"
	Nov 19 22:22:41 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:41.778137    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b687585b-f9cc-4321-9055-9b5a448fd38f-lib-modules\") pod \"kube-proxy-r2sgg\" (UID: \"b687585b-f9cc-4321-9055-9b5a448fd38f\") " pod="kube-system/kube-proxy-r2sgg"
	Nov 19 22:22:41 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:41.778163    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2b8c2bb4-299d-40e7-af2f-7313f2ba0437-cni-cfg\") pod \"kindnet-ml6h4\" (UID: \"2b8c2bb4-299d-40e7-af2f-7313f2ba0437\") " pod="kube-system/kindnet-ml6h4"
	Nov 19 22:22:41 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:41.778194    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b8c2bb4-299d-40e7-af2f-7313f2ba0437-xtables-lock\") pod \"kindnet-ml6h4\" (UID: \"2b8c2bb4-299d-40e7-af2f-7313f2ba0437\") " pod="kube-system/kindnet-ml6h4"
	Nov 19 22:22:41 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:41.778221    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b8c2bb4-299d-40e7-af2f-7313f2ba0437-lib-modules\") pod \"kindnet-ml6h4\" (UID: \"2b8c2bb4-299d-40e7-af2f-7313f2ba0437\") " pod="kube-system/kindnet-ml6h4"
	Nov 19 22:22:41 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:41.778250    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b687585b-f9cc-4321-9055-9b5a448fd38f-xtables-lock\") pod \"kube-proxy-r2sgg\" (UID: \"b687585b-f9cc-4321-9055-9b5a448fd38f\") " pod="kube-system/kube-proxy-r2sgg"
	Nov 19 22:22:41 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:41.778272    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57jlb\" (UniqueName: \"kubernetes.io/projected/b687585b-f9cc-4321-9055-9b5a448fd38f-kube-api-access-57jlb\") pod \"kube-proxy-r2sgg\" (UID: \"b687585b-f9cc-4321-9055-9b5a448fd38f\") " pod="kube-system/kube-proxy-r2sgg"
	Nov 19 22:22:42 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:42.308588    1446 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r2sgg" podStartSLOduration=1.308561977 podStartE2EDuration="1.308561977s" podCreationTimestamp="2025-11-19 22:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:42.308407355 +0000 UTC m=+6.159694147" watchObservedRunningTime="2025-11-19 22:22:42.308561977 +0000 UTC m=+6.159848767"
	Nov 19 22:22:43 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:43.510959    1446 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-ml6h4" podStartSLOduration=2.510934008 podStartE2EDuration="2.510934008s" podCreationTimestamp="2025-11-19 22:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:43.49638496 +0000 UTC m=+7.347671750" watchObservedRunningTime="2025-11-19 22:22:43.510934008 +0000 UTC m=+7.362220798"
	Nov 19 22:22:53 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:53.236317    1446 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 22:22:53 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:53.868092    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn8cc\" (UniqueName: \"kubernetes.io/projected/df694cd4-aa56-4ee9-a15c-3c72c9bcb9c2-kube-api-access-kn8cc\") pod \"coredns-66bc5c9577-f5cqw\" (UID: \"df694cd4-aa56-4ee9-a15c-3c72c9bcb9c2\") " pod="kube-system/coredns-66bc5c9577-f5cqw"
	Nov 19 22:22:53 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:53.868164    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2d31d0ff-61d4-4948-a718-08c43b520656-tmp\") pod \"storage-provisioner\" (UID: \"2d31d0ff-61d4-4948-a718-08c43b520656\") " pod="kube-system/storage-provisioner"
	Nov 19 22:22:53 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:53.868206    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7bvw\" (UniqueName: \"kubernetes.io/projected/2d31d0ff-61d4-4948-a718-08c43b520656-kube-api-access-v7bvw\") pod \"storage-provisioner\" (UID: \"2d31d0ff-61d4-4948-a718-08c43b520656\") " pod="kube-system/storage-provisioner"
	Nov 19 22:22:53 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:53.868955    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df694cd4-aa56-4ee9-a15c-3c72c9bcb9c2-config-volume\") pod \"coredns-66bc5c9577-f5cqw\" (UID: \"df694cd4-aa56-4ee9-a15c-3c72c9bcb9c2\") " pod="kube-system/coredns-66bc5c9577-f5cqw"
	Nov 19 22:22:55 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:55.341616    1446 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.341591667 podStartE2EDuration="12.341591667s" podCreationTimestamp="2025-11-19 22:22:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:55.341280911 +0000 UTC m=+19.192567784" watchObservedRunningTime="2025-11-19 22:22:55.341591667 +0000 UTC m=+19.192878457"
	Nov 19 22:22:55 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:55.356935    1446 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-f5cqw" podStartSLOduration=13.356855823 podStartE2EDuration="13.356855823s" podCreationTimestamp="2025-11-19 22:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:22:55.356508044 +0000 UTC m=+19.207794846" watchObservedRunningTime="2025-11-19 22:22:55.356855823 +0000 UTC m=+19.208142613"
	Nov 19 22:22:57 default-k8s-diff-port-409240 kubelet[1446]: I1119 22:22:57.900538    1446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4t44\" (UniqueName: \"kubernetes.io/projected/eee884c0-8976-48f4-8b93-86a4bc150754-kube-api-access-p4t44\") pod \"busybox\" (UID: \"eee884c0-8976-48f4-8b93-86a4bc150754\") " pod="default/busybox"
	
	
	==> storage-provisioner [a1d8eb49da11368af8baafedb6697768131e7a87fc151cc41099221841ba7546] <==
	I1119 22:22:54.417256       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:22:54.420477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:54.429346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:22:54.429577       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:22:54.429658       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c37bf72e-2310-4ea3-bd14-d23e7de696c3", APIVersion:"v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-409240_872b4934-0d00-4efe-9d23-b6c75348de0a became leader
	I1119 22:22:54.429762       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-409240_872b4934-0d00-4efe-9d23-b6c75348de0a!
	W1119 22:22:54.434715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:54.440782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:22:54.530327       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-409240_872b4934-0d00-4efe-9d23-b6c75348de0a!
	W1119 22:22:56.445806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:56.452525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:58.456903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:22:58.461554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:00.465419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:00.469974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:02.473629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:02.479458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:04.484788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:04.494725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:06.499032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:06.502939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:08.508315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:08.516127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:10.521156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:23:10.526409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-409240 -n default-k8s-diff-port-409240
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-409240 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (14.61s)

                                                
                                    

Test pass (303/333)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 16.48
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 11.25
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.24
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
20 TestDownloadOnlyKic 0.42
21 TestBinaryMirror 0.83
22 TestOffline 50.84
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 125.83
29 TestAddons/serial/Volcano 40.24
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.47
35 TestAddons/parallel/Registry 15.49
36 TestAddons/parallel/RegistryCreds 0.72
37 TestAddons/parallel/Ingress 20.79
38 TestAddons/parallel/InspektorGadget 10.66
39 TestAddons/parallel/MetricsServer 6.62
41 TestAddons/parallel/CSI 30.63
42 TestAddons/parallel/Headlamp 17.42
43 TestAddons/parallel/CloudSpanner 5.49
44 TestAddons/parallel/LocalPath 55.61
45 TestAddons/parallel/NvidiaDevicePlugin 5.48
46 TestAddons/parallel/Yakd 10.69
47 TestAddons/parallel/AmdGpuDevicePlugin 5.54
48 TestAddons/StoppedEnableDisable 12.31
49 TestCertOptions 24.96
50 TestCertExpiration 210.52
52 TestForceSystemdFlag 24.51
53 TestForceSystemdEnv 30.32
54 TestDockerEnvContainerd 38.02
58 TestErrorSpam/setup 20.1
59 TestErrorSpam/start 0.66
60 TestErrorSpam/status 0.95
61 TestErrorSpam/pause 1.46
62 TestErrorSpam/unpause 1.5
63 TestErrorSpam/stop 1.5
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 41.61
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.83
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.72
75 TestFunctional/serial/CacheCmd/cache/add_local 1.92
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.59
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 44.04
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.22
86 TestFunctional/serial/LogsFileCmd 1.26
87 TestFunctional/serial/InvalidService 4.45
89 TestFunctional/parallel/ConfigCmd 0.47
90 TestFunctional/parallel/DashboardCmd 10.78
91 TestFunctional/parallel/DryRun 0.48
92 TestFunctional/parallel/InternationalLanguage 0.17
93 TestFunctional/parallel/StatusCmd 1.03
97 TestFunctional/parallel/ServiceCmdConnect 19.69
98 TestFunctional/parallel/AddonsCmd 0.19
99 TestFunctional/parallel/PersistentVolumeClaim 31.37
101 TestFunctional/parallel/SSHCmd 0.63
102 TestFunctional/parallel/CpCmd 1.83
103 TestFunctional/parallel/MySQL 20.23
104 TestFunctional/parallel/FileSync 0.31
105 TestFunctional/parallel/CertSync 1.82
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.61
113 TestFunctional/parallel/License 0.48
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 0.48
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.74
121 TestFunctional/parallel/ImageCommands/Setup 1.73
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.28
127 TestFunctional/parallel/ProfileCmd/profile_list 0.48
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
131 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 18.26
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.06
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.02
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.72
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.42
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
146 TestFunctional/parallel/ServiceCmd/DeployApp 7.13
147 TestFunctional/parallel/MountCmd/any-port 7.76
148 TestFunctional/parallel/ServiceCmd/List 0.96
149 TestFunctional/parallel/ServiceCmd/JSONOutput 0.99
150 TestFunctional/parallel/MountCmd/specific-port 2.17
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.58
152 TestFunctional/parallel/ServiceCmd/Format 0.56
153 TestFunctional/parallel/ServiceCmd/URL 0.6
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.93
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 140.98
163 TestMultiControlPlane/serial/DeployApp 5.35
164 TestMultiControlPlane/serial/PingHostFromPods 1.17
165 TestMultiControlPlane/serial/AddWorkerNode 24.53
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
168 TestMultiControlPlane/serial/CopyFile 16.9
169 TestMultiControlPlane/serial/StopSecondaryNode 12.75
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.71
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 93.69
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9.49
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
176 TestMultiControlPlane/serial/StopCluster 36.25
177 TestMultiControlPlane/serial/RestartCluster 57.91
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
179 TestMultiControlPlane/serial/AddSecondaryNode 41.84
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
185 TestJSONOutput/start/Command 39.22
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.76
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.6
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.87
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 44.82
211 TestKicCustomNetwork/use_default_bridge_network 22.56
212 TestKicExistingNetwork 23.1
213 TestKicCustomSubnet 24.3
214 TestKicStaticIP 26.6
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 51.59
219 TestMountStart/serial/StartWithMountFirst 4.53
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 4.52
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.26
226 TestMountStart/serial/RestartStopped 7.87
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 62.87
231 TestMultiNode/serial/DeployApp2Nodes 5.02
232 TestMultiNode/serial/PingHostFrom2Pods 0.8
233 TestMultiNode/serial/AddNode 23.63
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.66
236 TestMultiNode/serial/CopyFile 9.82
237 TestMultiNode/serial/StopNode 2.27
238 TestMultiNode/serial/StartAfterStop 7.2
239 TestMultiNode/serial/RestartKeepsNodes 72.12
240 TestMultiNode/serial/DeleteNode 5.26
241 TestMultiNode/serial/StopMultiNode 24.1
242 TestMultiNode/serial/RestartMultiNode 44.48
243 TestMultiNode/serial/ValidateNameConflict 23.59
248 TestPreload 109.69
250 TestScheduledStopUnix 97.73
253 TestInsufficientStorage 9.7
254 TestRunningBinaryUpgrade 103.85
256 TestKubernetesUpgrade 331.51
257 TestMissingContainerUpgrade 90.78
259 TestPause/serial/Start 50.16
260 TestStoppedBinaryUpgrade/Setup 2.66
261 TestStoppedBinaryUpgrade/Upgrade 105.4
262 TestPause/serial/SecondStartNoReconfiguration 7.15
263 TestPause/serial/Pause 1.82
264 TestPause/serial/VerifyStatus 0.39
265 TestPause/serial/Unpause 0.91
266 TestPause/serial/PauseAgain 0.74
267 TestPause/serial/DeletePaused 4.84
268 TestPause/serial/VerifyDeletedResources 0.78
276 TestStoppedBinaryUpgrade/MinikubeLogs 4.52
278 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
279 TestNoKubernetes/serial/StartWithK8s 23.11
283 TestNoKubernetes/serial/StartWithStopK8s 8.72
288 TestNetworkPlugins/group/false 4
292 TestNoKubernetes/serial/Start 7.54
293 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
294 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
295 TestNoKubernetes/serial/ProfileList 15.63
296 TestNoKubernetes/serial/Stop 1.28
297 TestNoKubernetes/serial/StartNoArgs 7.01
298 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
300 TestStartStop/group/old-k8s-version/serial/FirstStart 53.27
302 TestStartStop/group/no-preload/serial/FirstStart 53.52
304 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.94
305 TestStartStop/group/old-k8s-version/serial/Stop 12.09
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
308 TestStartStop/group/old-k8s-version/serial/SecondStart 46.85
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.83
310 TestStartStop/group/no-preload/serial/Stop 12.04
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
312 TestStartStop/group/no-preload/serial/SecondStart 50.57
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
316 TestStartStop/group/old-k8s-version/serial/Pause 2.8
318 TestStartStop/group/embed-certs/serial/FirstStart 40.67
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
322 TestStartStop/group/no-preload/serial/Pause 2.98
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.74
326 TestStartStop/group/newest-cni/serial/FirstStart 30.46
328 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.95
329 TestStartStop/group/embed-certs/serial/Stop 12.65
330 TestStartStop/group/newest-cni/serial/DeployApp 0
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.47
332 TestStartStop/group/newest-cni/serial/Stop 1.37
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
335 TestStartStop/group/newest-cni/serial/SecondStart 13.52
336 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.29
337 TestStartStop/group/embed-certs/serial/SecondStart 51.36
338 TestNetworkPlugins/group/auto/Start 43.29
339 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.42
342 TestStartStop/group/newest-cni/serial/Pause 3.76
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.17
344 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.78
345 TestNetworkPlugins/group/kindnet/Start 42.85
346 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
347 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 54.6
348 TestNetworkPlugins/group/auto/KubeletFlags 0.31
349 TestNetworkPlugins/group/auto/NetCatPod 8.22
350 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
351 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
352 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
353 TestNetworkPlugins/group/auto/DNS 0.13
354 TestNetworkPlugins/group/auto/Localhost 0.1
355 TestNetworkPlugins/group/auto/HairPin 0.12
356 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
357 TestStartStop/group/embed-certs/serial/Pause 2.89
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
359 TestNetworkPlugins/group/kindnet/NetCatPod 10.26
360 TestNetworkPlugins/group/calico/Start 53.42
361 TestNetworkPlugins/group/kindnet/DNS 0.16
362 TestNetworkPlugins/group/kindnet/Localhost 0.12
363 TestNetworkPlugins/group/kindnet/HairPin 0.13
364 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
365 TestNetworkPlugins/group/custom-flannel/Start 53.66
366 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
367 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
368 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.49
369 TestNetworkPlugins/group/enable-default-cni/Start 68.6
370 TestNetworkPlugins/group/flannel/Start 60.13
371 TestNetworkPlugins/group/calico/ControllerPod 6.01
372 TestNetworkPlugins/group/calico/KubeletFlags 0.3
373 TestNetworkPlugins/group/calico/NetCatPod 10.22
374 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
375 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.2
376 TestNetworkPlugins/group/calico/DNS 0.13
377 TestNetworkPlugins/group/calico/Localhost 0.12
378 TestNetworkPlugins/group/calico/HairPin 0.12
379 TestNetworkPlugins/group/custom-flannel/DNS 0.13
380 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
381 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/bridge/Start 61.11
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.41
385 TestNetworkPlugins/group/flannel/NetCatPod 10.21
386 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
387 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.24
388 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
389 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
390 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
391 TestNetworkPlugins/group/flannel/DNS 0.15
392 TestNetworkPlugins/group/flannel/Localhost 0.12
393 TestNetworkPlugins/group/flannel/HairPin 0.12
394 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
395 TestNetworkPlugins/group/bridge/NetCatPod 9.17
396 TestNetworkPlugins/group/bridge/DNS 0.13
397 TestNetworkPlugins/group/bridge/Localhost 0.11
398 TestNetworkPlugins/group/bridge/HairPin 0.11
TestDownloadOnly/v1.28.0/json-events (16.48s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-785084 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-785084 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (16.479345739s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (16.48s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1119 21:47:25.339654   12821 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1119 21:47:25.339754   12821 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-785084
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-785084: exit status 85 (75.14251ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-785084 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-785084 │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 21:47:08
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 21:47:08.909061   12832 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:47:08.909290   12832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:47:08.909298   12832 out.go:374] Setting ErrFile to fd 2...
	I1119 21:47:08.909302   12832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:47:08.909465   12832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
	W1119 21:47:08.909588   12832 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21918-9296/.minikube/config/config.json: open /home/jenkins/minikube-integration/21918-9296/.minikube/config/config.json: no such file or directory
	I1119 21:47:08.910052   12832 out.go:368] Setting JSON to true
	I1119 21:47:08.910904   12832 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1769,"bootTime":1763587060,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 21:47:08.911000   12832 start.go:143] virtualization: kvm guest
	I1119 21:47:08.913020   12832 out.go:99] [download-only-785084] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1119 21:47:08.913155   12832 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21918-9296/.minikube/cache/preloaded-tarball: no such file or directory
	I1119 21:47:08.913197   12832 notify.go:221] Checking for updates...
	I1119 21:47:08.914591   12832 out.go:171] MINIKUBE_LOCATION=21918
	I1119 21:47:08.915830   12832 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:47:08.917167   12832 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 21:47:08.918492   12832 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9296/.minikube
	I1119 21:47:08.919582   12832 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1119 21:47:08.921580   12832 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1119 21:47:08.921784   12832 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:47:08.944440   12832 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 21:47:08.944509   12832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:47:09.313851   12832 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-19 21:47:09.303803123 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 21:47:09.313973   12832 docker.go:319] overlay module found
	I1119 21:47:09.315543   12832 out.go:99] Using the docker driver based on user configuration
	I1119 21:47:09.315571   12832 start.go:309] selected driver: docker
	I1119 21:47:09.315580   12832 start.go:930] validating driver "docker" against <nil>
	I1119 21:47:09.315651   12832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:47:09.378570   12832 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-19 21:47:09.36772189 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 21:47:09.378771   12832 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 21:47:09.379332   12832 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1119 21:47:09.379489   12832 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1119 21:47:09.381126   12832 out.go:171] Using Docker driver with root privileges
	I1119 21:47:09.382419   12832 cni.go:84] Creating CNI manager for ""
	I1119 21:47:09.382482   12832 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 21:47:09.382493   12832 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 21:47:09.382557   12832 start.go:353] cluster config:
	{Name:download-only-785084 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-785084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:47:09.383923   12832 out.go:99] Starting "download-only-785084" primary control-plane node in "download-only-785084" cluster
	I1119 21:47:09.383953   12832 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 21:47:09.385166   12832 out.go:99] Pulling base image v0.0.48-1763561786-21918 ...
	I1119 21:47:09.385206   12832 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1119 21:47:09.385304   12832 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 21:47:09.403518   12832 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 to local cache
	I1119 21:47:09.403766   12832 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local cache directory
	I1119 21:47:09.403911   12832 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 to local cache
	I1119 21:47:09.482622   12832 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1119 21:47:09.482651   12832 cache.go:65] Caching tarball of preloaded images
	I1119 21:47:09.482840   12832 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1119 21:47:09.484660   12832 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1119 21:47:09.484685   12832 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1119 21:47:09.586944   12832 preload.go:295] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1119 21:47:09.587076   12832 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1119 21:47:18.829752   12832 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 as a tarball
	
	
	* The control-plane node download-only-785084 host does not exist
	  To start a cluster, run: "minikube start -p download-only-785084"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
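The non-zero exit here is expected: a download-only profile never creates a node, so there is nothing for the logs command to collect. A sketch of the same check, assuming the profile from this run still exists:

  minikube logs -p download-only-785084
  echo $?   # this run observed exit status 85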

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-785084
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)
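Cleanup in these subtests is plain profile deletion, and the same commands work interactively; the DeleteAlwaysSucceeds subtest suggests that deleting an already-removed profile still succeeds:

  minikube delete --all                     # remove every profile on the host
  minikube delete -p download-only-785084   # remove one named profile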

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (11.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-753270 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-753270 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.246085391s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (11.25s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1119 21:47:37.047202   12821 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1119 21:47:37.047255   12821 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-753270
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-753270: exit status 85 (73.245441ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-785084 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-785084 │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │ 19 Nov 25 21:47 UTC │
	│ delete  │ -p download-only-785084                                                                                                                                                               │ download-only-785084 │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │ 19 Nov 25 21:47 UTC │
	│ start   │ -o=json --download-only -p download-only-753270 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-753270 │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 21:47:25
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 21:47:25.852716   13218 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:47:25.852815   13218 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:47:25.852821   13218 out.go:374] Setting ErrFile to fd 2...
	I1119 21:47:25.852827   13218 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:47:25.853065   13218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
	I1119 21:47:25.853561   13218 out.go:368] Setting JSON to true
	I1119 21:47:25.854356   13218 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1786,"bootTime":1763587060,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 21:47:25.854448   13218 start.go:143] virtualization: kvm guest
	I1119 21:47:25.856436   13218 out.go:99] [download-only-753270] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 21:47:25.856593   13218 notify.go:221] Checking for updates...
	I1119 21:47:25.857860   13218 out.go:171] MINIKUBE_LOCATION=21918
	I1119 21:47:25.859140   13218 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:47:25.860533   13218 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 21:47:25.862053   13218 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9296/.minikube
	I1119 21:47:25.863542   13218 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1119 21:47:25.866002   13218 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1119 21:47:25.866244   13218 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:47:25.889644   13218 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 21:47:25.889721   13218 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:47:25.953278   13218 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-19 21:47:25.943611362 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 21:47:25.953383   13218 docker.go:319] overlay module found
	I1119 21:47:25.955129   13218 out.go:99] Using the docker driver based on user configuration
	I1119 21:47:25.955163   13218 start.go:309] selected driver: docker
	I1119 21:47:25.955178   13218 start.go:930] validating driver "docker" against <nil>
	I1119 21:47:25.955251   13218 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:47:26.014600   13218 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-19 21:47:26.005344497 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 21:47:26.014747   13218 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 21:47:26.015263   13218 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1119 21:47:26.015405   13218 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1119 21:47:26.017116   13218 out.go:171] Using Docker driver with root privileges
	I1119 21:47:26.018140   13218 cni.go:84] Creating CNI manager for ""
	I1119 21:47:26.018190   13218 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 21:47:26.018199   13218 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 21:47:26.018251   13218 start.go:353] cluster config:
	{Name:download-only-753270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-753270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:47:26.019494   13218 out.go:99] Starting "download-only-753270" primary control-plane node in "download-only-753270" cluster
	I1119 21:47:26.019506   13218 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 21:47:26.020682   13218 out.go:99] Pulling base image v0.0.48-1763561786-21918 ...
	I1119 21:47:26.020720   13218 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 21:47:26.020852   13218 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 21:47:26.038182   13218 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 to local cache
	I1119 21:47:26.038311   13218 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local cache directory
	I1119 21:47:26.038333   13218 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local cache directory, skipping pull
	I1119 21:47:26.038339   13218 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in cache, skipping pull
	I1119 21:47:26.038346   13218 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 as a tarball
	I1119 21:47:26.191812   13218 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1119 21:47:26.191850   13218 cache.go:65] Caching tarball of preloaded images
	I1119 21:47:26.192201   13218 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 21:47:26.194590   13218 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1119 21:47:26.194614   13218 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1119 21:47:26.296020   13218 preload.go:295] Got checksum from GCS API "5d6e976daeaa84851976fc4d674fd8f4"
	I1119 21:47:26.296074   13218 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:5d6e976daeaa84851976fc4d674fd8f4 -> /home/jenkins/minikube-integration/21918-9296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-753270 host does not exist
	  To start a cluster, run: "minikube start -p download-only-753270"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-753270
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnlyKic (0.42s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-072375 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-072375" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-072375
--- PASS: TestDownloadOnlyKic (0.42s)

                                                
                                    
TestBinaryMirror (0.83s)

                                                
                                                
=== RUN   TestBinaryMirror
I1119 21:47:38.219868   12821 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-294957 --alsologtostderr --binary-mirror http://127.0.0.1:37179 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-294957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-294957
--- PASS: TestBinaryMirror (0.83s)
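The binary-mirror flag redirects the kubectl/kubelet/kubeadm downloads to an alternate HTTP endpoint. A sketch of the invocation used above, assuming something is already serving the binaries on 127.0.0.1:37179 (the test starts its own mirror):

  minikube start --download-only -p binary-mirror-294957 \
    --binary-mirror http://127.0.0.1:37179 \
    --driver=docker --container-runtime=containerd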

                                                
                                    
TestOffline (50.84s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-261676 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-261676 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (46.713333472s)
helpers_test.go:175: Cleaning up "offline-containerd-261676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-261676
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-261676: (4.127863429s)
--- PASS: TestOffline (50.84s)
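The offline test is an ordinary start with a small memory budget and a full readiness wait. A sketch, assuming the docker driver and the profile name from this run:

  minikube start -p offline-containerd-261676 --memory=3072 --wait=true \
    --driver=docker --container-runtime=containerd
  minikube delete -p offline-containerd-261676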

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-130311
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-130311: exit status 85 (66.602675ms)

                                                
                                                
-- stdout --
	* Profile "addons-130311" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-130311"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-130311
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-130311: exit status 85 (66.199542ms)

                                                
                                                
-- stdout --
	* Profile "addons-130311" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-130311"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
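Both PreSetup checks rely on addon commands refusing to act on a profile that does not exist yet. A sketch, assuming no profile named addons-130311 has been created:

  minikube profile list                                 # confirm the profile is absent
  minikube addons enable dashboard -p addons-130311     # exits non-zero (85 in this run)
  minikube addons disable dashboard -p addons-130311    # same behavior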

                                                
                                    
TestAddons/Setup (125.83s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-130311 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-130311 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m5.830084607s)
--- PASS: TestAddons/Setup (125.83s)
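Setup is a single start with a long --addons list. A trimmed sketch with a subset of the addons from this run (the full invocation is shown above):

  minikube start -p addons-130311 --wait=true --memory=4096 \
    --driver=docker --container-runtime=containerd \
    --addons=registry --addons=metrics-server --addons=csi-hostpath-driver \
    --addons=ingress --addons=ingress-dns --addons=gcp-auth --addons=volcano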

                                                
                                    
TestAddons/serial/Volcano (40.24s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:876: volcano-admission stabilized in 15.657817ms
addons_test.go:884: volcano-controller stabilized in 15.687881ms
addons_test.go:868: volcano-scheduler stabilized in 15.737664ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-qzklh" [984632d6-69f4-4b8e-9cf8-225934f6fa0d] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003294088s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-5pvg4" [f44193e6-c69d-4e3f-a412-1793b844f5e7] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00344474s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-q76r2" [e5dea97b-865d-4deb-b102-78aa38c527a6] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003450755s
addons_test.go:903: (dbg) Run:  kubectl --context addons-130311 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-130311 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-130311 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [81e1ed9a-97cf-4558-abe1-30ee559eaf50] Pending
helpers_test.go:352: "test-job-nginx-0" [81e1ed9a-97cf-4558-abe1-30ee559eaf50] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [81e1ed9a-97cf-4558-abe1-30ee559eaf50] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.00322265s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-130311 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-130311 addons disable volcano --alsologtostderr -v=1: (11.870815729s)
--- PASS: TestAddons/serial/Volcano (40.24s)
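The Volcano steps can be replayed with plain kubectl against the addons profile; the labels, namespaces, and manifests below are the ones this test waits on:

  kubectl --context addons-130311 get pods -n volcano-system -l app=volcano-scheduler
  kubectl --context addons-130311 create -f testdata/vcjob.yaml
  kubectl --context addons-130311 get vcjob -n my-volcano
  minikube -p addons-130311 addons disable volcano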

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-130311 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-130311 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.47s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-130311 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-130311 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b559a617-bfb6-4ed7-a2ec-646549e74d7d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b559a617-bfb6-4ed7-a2ec-646549e74d7d] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.002780546s
addons_test.go:694: (dbg) Run:  kubectl --context addons-130311 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-130311 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-130311 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.47s)
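The fake-credentials check only verifies that the gcp-auth webhook injected its environment into the busybox pod; the same probes can be run by hand once the pod is Running:

  kubectl --context addons-130311 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
  kubectl --context addons-130311 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"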

                                                
                                    
TestAddons/parallel/Registry (15.49s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.185855ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-9ln2r" [063d2324-9cff-4e02-80c4-dc79751d48b6] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002647769s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-dxwdr" [00c7a9f4-d4c7-4909-83cd-9c743d380335] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003910434s
addons_test.go:392: (dbg) Run:  kubectl --context addons-130311 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-130311 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-130311 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.622999469s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-130311 ip
2025/11/19 21:50:58 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-130311 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.49s)
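The registry check has two halves: in-cluster reachability through the service DNS name, and host-side reachability through the node IP on port 5000. A sketch using the commands from this run (curl stands in for the HTTP GET the test performs itself):

  # in-cluster: a throwaway busybox pod probes the registry service
  kubectl --context addons-130311 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  # host side: the registry is reachable on the node IP, port 5000
  curl -sI http://$(minikube -p addons-130311 ip):5000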

                                                
                                    
TestAddons/parallel/RegistryCreds (0.72s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.41678ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-130311
addons_test.go:332: (dbg) Run:  kubectl --context addons-130311 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-130311 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.72s)

                                                
                                    
TestAddons/parallel/Ingress (20.79s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-130311 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-130311 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-130311 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [7dc0f0e4-d9df-48bd-bd20-52fb2c532d51] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [7dc0f0e4-d9df-48bd-bd20-52fb2c532d51] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003182781s
I1119 21:51:05.750362   12821 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-130311 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-130311 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-130311 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-130311 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-130311 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-130311 addons disable ingress --alsologtostderr -v=1: (7.727523077s)
--- PASS: TestAddons/parallel/Ingress (20.79s)
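The ingress check curls the controller from inside the node with a Host header, then resolves a test name against the node IP to exercise ingress-dns; both steps are reproducible as-is:

  minikube -p addons-130311 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  nslookup hello-john.test $(minikube -p addons-130311 ip)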

                                                
                                    
TestAddons/parallel/InspektorGadget (10.66s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-fjr9t" [c36cecce-f291-4f88-a086-dea99d6057b8] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003826325s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-130311 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-130311 addons disable inspektor-gadget --alsologtostderr -v=1: (5.650232417s)
--- PASS: TestAddons/parallel/InspektorGadget (10.66s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.62s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 2.982336ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-ln2xt" [a9303d64-8d62-483b-85e3-470a8b8ee8da] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.002751011s
addons_test.go:463: (dbg) Run:  kubectl --context addons-130311 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-130311 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.62s)
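Once metrics-server reports healthy, the standard top query is what the test asserts:

  kubectl --context addons-130311 top pods -n kube-system
  minikube -p addons-130311 addons disable metrics-server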

                                                
                                    
TestAddons/parallel/CSI (30.63s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1119 21:50:50.297292   12821 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1119 21:50:50.300318   12821 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1119 21:50:50.300338   12821 kapi.go:107] duration metric: took 3.074141ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.082524ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-130311 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-130311 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-130311 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-130311 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-130311 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [061105be-68ae-47e3-af8b-99d0d93d8803] Pending
helpers_test.go:352: "task-pv-pod" [061105be-68ae-47e3-af8b-99d0d93d8803] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [061105be-68ae-47e3-af8b-99d0d93d8803] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003931652s
addons_test.go:572: (dbg) Run:  kubectl --context addons-130311 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-130311 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-130311 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-130311 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-130311 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-130311 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-130311 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-130311 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-130311 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-130311 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-130311 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-130311 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [f77fbf8c-5a74-4a59-96b4-52c0bc9a90eb] Pending
helpers_test.go:352: "task-pv-pod-restore" [f77fbf8c-5a74-4a59-96b4-52c0bc9a90eb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [f77fbf8c-5a74-4a59-96b4-52c0bc9a90eb] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003102154s
addons_test.go:614: (dbg) Run:  kubectl --context addons-130311 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-130311 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-130311 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-130311 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-130311 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-130311 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.571092854s)
--- PASS: TestAddons/parallel/CSI (30.63s)
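For anyone replaying this CSI flow by hand, the manifests below are a minimal sketch of the objects the test creates, in the same order: a claim, a snapshot of it once a pod has written data, then a restore claim. The real testdata/csi-hostpath-driver YAML is not reproduced here, and the csi-hostpath-sc / csi-hostpath-snapclass class names are assumptions about what the addon installs.

# 1) Claim backed by the hostpath CSI driver (StorageClass name assumed):
kubectl --context addons-130311 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  storageClassName: csi-hostpath-sc
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF

# 2) After a pod has mounted hpvc and written to it, snapshot the claim:
kubectl --context addons-130311 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: hpvc
EOF

# 3) Restore into a fresh claim by pointing dataSource at the snapshot:
kubectl --context addons-130311 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: new-snapshot-demo
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF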

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.42s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-130311 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-zc89v" [b4917545-079b-438b-8560-62ee3f20af96] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-zc89v" [b4917545-079b-438b-8560-62ee3f20af96] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003367702s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-130311 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-130311 addons disable headlamp --alsologtostderr -v=1: (5.658466444s)
--- PASS: TestAddons/parallel/Headlamp (17.42s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.49s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-fnq2d" [292f972c-3bb1-49b2-abdd-ac8c778bac32] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003148276s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-130311 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.49s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (55.61s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-130311 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-130311 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-130311 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-130311 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-130311 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-130311 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-130311 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-130311 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-130311 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-130311 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [64ed1604-c968-40c5-ac67-69495bc22a94] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [64ed1604-c968-40c5-ac67-69495bc22a94] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [64ed1604-c968-40c5-ac67-69495bc22a94] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003270202s
addons_test.go:967: (dbg) Run:  kubectl --context addons-130311 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-130311 ssh "cat /opt/local-path-provisioner/pvc-004be200-bcf5-4b5e-ae7a-084495807fcb_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-130311 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-130311 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-130311 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-130311 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.721528559s)
--- PASS: TestAddons/parallel/LocalPath (55.61s)
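The local-path test above exercises the Rancher local-path provisioner: a PVC with storageClassName local-path, a short-lived busybox pod that writes one file to the volume, and a check that the file shows up under /opt/local-path-provisioner on the node. A hand-rolled sketch follows; the pod command and sizes are illustrative, not the contents of testdata/storage-provisioner-rancher.

kubectl --context addons-130311 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 64Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
  labels:
    run: test-local-path
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:stable
    command: ["sh", "-c", "echo hello-from-local-path > /test/file1"]
    volumeMounts:
    - name: data
      mountPath: /test
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF
# Once the pod completes, the provisioned directory lives on the node itself:
minikube -p addons-130311 ssh -- "sudo find /opt/local-path-provisioner -name file1 -exec cat {} \;"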

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.48s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-zn4mx" [a6cdaeef-9f90-492b-836f-d3a6f754d1ba] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003718655s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-130311 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.48s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.69s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-cb4dg" [edb42408-615c-4474-ad4c-15ee06842901] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003889402s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-130311 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-130311 addons disable yakd --alsologtostderr -v=1: (5.685676242s)
--- PASS: TestAddons/parallel/Yakd (10.69s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.54s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-mfx8s" [dfa844f2-e22b-47ad-b8e7-c4f7f796838f] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003810767s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-130311 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.54s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.31s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-130311
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-130311: (12.021635565s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-130311
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-130311
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-130311
--- PASS: TestAddons/StoppedEnableDisable (12.31s)

                                                
                                    
x
+
TestCertOptions (24.96s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-071115 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-071115 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (21.838989693s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-071115 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-071115 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-071115 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-071115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-071115
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-071115: (2.40916364s)
--- PASS: TestCertOptions (24.96s)
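What TestCertOptions asserts is that the extra --apiserver-ips/--apiserver-names end up as SANs in the generated apiserver certificate and that the kubeconfig points at the non-default --apiserver-port. A quick manual check along the same lines, reusing the flags and profile name from the run above:

minikube start -p cert-options-071115 --memory=3072 \
  --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
  --apiserver-names=localhost --apiserver-names=www.google.com \
  --apiserver-port=8555 --driver=docker --container-runtime=containerd
# SANs should list 192.168.15.15 and www.google.com:
minikube -p cert-options-071115 ssh -- \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"
# The cluster's server URL should carry port 8555:
kubectl --context cert-options-071115 config view \
  -o jsonpath='{.clusters[?(@.name=="cert-options-071115")].cluster.server}'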

                                                
                                    
x
+
TestCertExpiration (210.52s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-207460 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-207460 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (21.314102822s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-207460 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-207460 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.638520436s)
helpers_test.go:175: Cleaning up "cert-expiration-207460" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-207460
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-207460: (2.56899376s)
--- PASS: TestCertExpiration (210.52s)
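The two starts above are the whole trick: the first provisions certificates that expire after --cert-expiration=3m, and the second restart (8760h, i.e. one year) forces minikube to regenerate them once the short-lived ones have lapsed. A manual equivalent, with the expiry made visible via openssl:

minikube start -p cert-expiration-207460 --memory=3072 --cert-expiration=3m \
  --driver=docker --container-runtime=containerd
minikube -p cert-expiration-207460 ssh -- \
  "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
# ...wait out the 3 minutes, then restart with a long expiration to regenerate:
minikube start -p cert-expiration-207460 --memory=3072 --cert-expiration=8760h \
  --driver=docker --container-runtime=containerd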

                                                
                                    
x
+
TestForceSystemdFlag (24.51s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-635885 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-635885 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (22.171327249s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-635885 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-635885" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-635885
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-635885: (2.043965794s)
--- PASS: TestForceSystemdFlag (24.51s)
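Beyond a clean start, the only assertion here is the `cat /etc/containerd/config.toml` above: with --force-systemd, containerd's runc options should select the systemd cgroup driver. A hand-run spot check; the expected `SystemdCgroup = true` line is my reading of what the test looks for, not quoted from it:

minikube start -p force-systemd-flag-635885 --memory=3072 --force-systemd \
  --driver=docker --container-runtime=containerd
minikube -p force-systemd-flag-635885 ssh -- "grep SystemdCgroup /etc/containerd/config.toml"
# Expected output when the flag took effect:
#   SystemdCgroup = true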

                                                
                                    
x
+
TestForceSystemdEnv (30.32s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-059219 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-059219 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (27.436431096s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-059219 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-059219" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-059219
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-059219: (2.558515904s)
--- PASS: TestForceSystemdEnv (30.32s)

                                                
                                    
x
+
TestDockerEnvContainerd (38.02s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-245371 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-245371 --driver=docker  --container-runtime=containerd: (21.876721152s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-245371"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXrbhGkk/agent.36886" SSH_AGENT_PID="36887" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXrbhGkk/agent.36886" SSH_AGENT_PID="36887" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXrbhGkk/agent.36886" SSH_AGENT_PID="36887" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.92824716s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXrbhGkk/agent.36886" SSH_AGENT_PID="36887" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-245371" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-245371
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-245371: (2.3075374s)
--- PASS: TestDockerEnvContainerd (38.02s)
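The docker-env dance above works interactively too: --ssh-host makes the emitted DOCKER_HOST an ssh:// URL into the node and --ssh-add loads the node's key into a running ssh-agent, so the host's docker CLI builds directly inside the minikube machine. A sketch of the same flow, assuming an ssh-agent is already running and reusing the image tag and testdata path from the log:

minikube start -p dockerenv-245371 --driver=docker --container-runtime=containerd
eval "$(minikube -p dockerenv-245371 docker-env --ssh-host --ssh-add)"
docker version                      # client on the host, daemon inside the node
DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
docker image ls | grep minikube-dockerenv-containerd-test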

                                                
                                    
x
+
TestErrorSpam/setup (20.1s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-791870 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-791870 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-791870 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-791870 --driver=docker  --container-runtime=containerd: (20.09496291s)
--- PASS: TestErrorSpam/setup (20.10s)

                                                
                                    
x
+
TestErrorSpam/start (0.66s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791870 --log_dir /tmp/nospam-791870 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791870 --log_dir /tmp/nospam-791870 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791870 --log_dir /tmp/nospam-791870 start --dry-run
--- PASS: TestErrorSpam/start (0.66s)

                                                
                                    
x
+
TestErrorSpam/status (0.95s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791870 --log_dir /tmp/nospam-791870 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791870 --log_dir /tmp/nospam-791870 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791870 --log_dir /tmp/nospam-791870 status
--- PASS: TestErrorSpam/status (0.95s)

                                                
                                    
x
+
TestErrorSpam/pause (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791870 --log_dir /tmp/nospam-791870 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791870 --log_dir /tmp/nospam-791870 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791870 --log_dir /tmp/nospam-791870 pause
--- PASS: TestErrorSpam/pause (1.46s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.5s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791870 --log_dir /tmp/nospam-791870 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791870 --log_dir /tmp/nospam-791870 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791870 --log_dir /tmp/nospam-791870 unpause
--- PASS: TestErrorSpam/unpause (1.50s)

                                                
                                    
x
+
TestErrorSpam/stop (1.5s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791870 --log_dir /tmp/nospam-791870 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-791870 --log_dir /tmp/nospam-791870 stop: (1.298475727s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791870 --log_dir /tmp/nospam-791870 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791870 --log_dir /tmp/nospam-791870 stop
--- PASS: TestErrorSpam/stop (1.50s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21918-9296/.minikube/files/etc/test/nested/copy/12821/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (41.61s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-142762 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-142762 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (41.610230268s)
--- PASS: TestFunctional/serial/StartWithProxy (41.61s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (5.83s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1119 21:54:10.636137   12821 config.go:182] Loaded profile config "functional-142762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-142762 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-142762 --alsologtostderr -v=8: (5.825434185s)
functional_test.go:678: soft start took 5.826415968s for "functional-142762" cluster.
I1119 21:54:16.462140   12821 config.go:182] Loaded profile config "functional-142762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (5.83s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-142762 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.72s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.92s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-142762 /tmp/TestFunctionalserialCacheCmdcacheadd_local319134859/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 cache add minikube-local-cache-test:functional-142762
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-142762 cache add minikube-local-cache-test:functional-142762: (1.554312803s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 cache delete minikube-local-cache-test:functional-142762
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-142762
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.92s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-142762 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (287.509513ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)
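Spelled out as shell, the reload check above is: delete a cached image from the node's container runtime, confirm it is really gone (crictl inspecti exits 1), then let `cache reload` push it back from minikube's on-host cache:

minikube -p functional-142762 cache add registry.k8s.io/pause:latest
minikube -p functional-142762 ssh -- sudo crictl rmi registry.k8s.io/pause:latest
minikube -p functional-142762 ssh -- sudo crictl inspecti registry.k8s.io/pause:latest   # fails: no such image
minikube -p functional-142762 cache reload
minikube -p functional-142762 ssh -- sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again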

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 kubectl -- --context functional-142762 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-142762 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (44.04s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-142762 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1119 21:54:44.954198   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:44.960655   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:44.972098   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:44.993521   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:45.034966   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:45.116445   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:45.277975   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:45.599715   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:46.241665   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:47.523272   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:50.086190   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:55.207764   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:55:05.449704   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-142762 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.038025222s)
functional_test.go:776: restart took 44.038152739s for "functional-142762" cluster.
I1119 21:55:07.608081   12821 config.go:182] Loaded profile config "functional-142762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (44.04s)
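--extra-config takes component.flag=value pairs; the restart above threads NamespaceAutoProvision into the kube-apiserver's --enable-admission-plugins. One way to confirm the flag actually reached the apiserver, assuming the usual kubeadm static-pod manifest location:

minikube start -p functional-142762 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
minikube -p functional-142762 ssh -- \
  "sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml"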

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-142762 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.22s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-142762 logs: (1.222707045s)
--- PASS: TestFunctional/serial/LogsCmd (1.22s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 logs --file /tmp/TestFunctionalserialLogsFileCmd2179610344/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-142762 logs --file /tmp/TestFunctionalserialLogsFileCmd2179610344/001/logs.txt: (1.256258739s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.26s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.45s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-142762 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-142762
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-142762: exit status 115 (346.291091ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31617 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-142762 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.45s)
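The point of this test is that `minikube service` refuses to tunnel to a Service with no running backing pod and exits with SVC_UNREACHABLE (status 115 above). The real testdata/invalidsvc.yaml is not shown here; the sketch below provokes the same failure simply by giving the Service a selector that matches nothing:

kubectl --context functional-142762 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: no-such-pod    # no pod carries this label, so the service never gets endpoints
  ports:
  - port: 80
    targetPort: 80
EOF
minikube -p functional-142762 service invalid-svc || echo "failed as expected (exit $?)"
kubectl --context functional-142762 delete service invalid-svc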

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-142762 config get cpus: exit status 14 (90.245426ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-142762 config get cpus: exit status 14 (89.514233ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
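The same set/get/unset round trip by hand; `config get` on an unset key is expected to fail (exit status 14 in the runs above), which is what the two Non-zero exit entries assert:

minikube -p functional-142762 config get cpus || echo "unset (exit $?)"
minikube -p functional-142762 config set cpus 2
minikube -p functional-142762 config get cpus          # prints 2
minikube -p functional-142762 config unset cpus
minikube -p functional-142762 config get cpus || echo "unset again (exit $?)"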

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (10.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-142762 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-142762 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 59599: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.78s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-142762 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-142762 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (208.049757ms)

                                                
                                                
-- stdout --
	* [functional-142762] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-9296/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9296/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 21:55:44.119243   58612 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:55:44.119526   58612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:55:44.119536   58612 out.go:374] Setting ErrFile to fd 2...
	I1119 21:55:44.119541   58612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:55:44.119793   58612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
	I1119 21:55:44.120318   58612 out.go:368] Setting JSON to false
	I1119 21:55:44.121549   58612 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2284,"bootTime":1763587060,"procs":259,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 21:55:44.121645   58612 start.go:143] virtualization: kvm guest
	I1119 21:55:44.123591   58612 out.go:179] * [functional-142762] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 21:55:44.125288   58612 notify.go:221] Checking for updates...
	I1119 21:55:44.125292   58612 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 21:55:44.127247   58612 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:55:44.129206   58612 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 21:55:44.130581   58612 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9296/.minikube
	I1119 21:55:44.131922   58612 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 21:55:44.133177   58612 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 21:55:44.134748   58612 config.go:182] Loaded profile config "functional-142762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 21:55:44.135172   58612 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:55:44.167635   58612 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 21:55:44.167724   58612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:55:44.254240   58612 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-19 21:55:44.241145946 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 21:55:44.254986   58612 docker.go:319] overlay module found
	I1119 21:55:44.257028   58612 out.go:179] * Using the docker driver based on existing profile
	I1119 21:55:44.258469   58612 start.go:309] selected driver: docker
	I1119 21:55:44.258491   58612 start.go:930] validating driver "docker" against &{Name:functional-142762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-142762 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:55:44.258627   58612 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 21:55:44.260626   58612 out.go:203] 
	W1119 21:55:44.261876   58612 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1119 21:55:44.263053   58612 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-142762 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.48s)

TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-142762 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-142762 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (165.430907ms)

                                                
                                                
-- stdout --
	* [functional-142762] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-9296/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9296/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 21:55:36.269621   56497 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:55:36.269806   56497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:55:36.269815   56497 out.go:374] Setting ErrFile to fd 2...
	I1119 21:55:36.269822   56497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:55:36.270617   56497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
	I1119 21:55:36.271155   56497 out.go:368] Setting JSON to false
	I1119 21:55:36.272177   56497 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2276,"bootTime":1763587060,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 21:55:36.272261   56497 start.go:143] virtualization: kvm guest
	I1119 21:55:36.274306   56497 out.go:179] * [functional-142762] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1119 21:55:36.275472   56497 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 21:55:36.275475   56497 notify.go:221] Checking for updates...
	I1119 21:55:36.276621   56497 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:55:36.278244   56497 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 21:55:36.279521   56497 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9296/.minikube
	I1119 21:55:36.280684   56497 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 21:55:36.281954   56497 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 21:55:36.283621   56497 config.go:182] Loaded profile config "functional-142762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 21:55:36.284145   56497 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:55:36.307875   56497 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 21:55:36.307978   56497 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:55:36.366489   56497 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-19 21:55:36.356956363 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 21:55:36.366595   56497 docker.go:319] overlay module found
	I1119 21:55:36.369243   56497 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1119 21:55:36.370537   56497 start.go:309] selected driver: docker
	I1119 21:55:36.370554   56497 start.go:930] validating driver "docker" against &{Name:functional-142762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-142762 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:55:36.370638   56497 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 21:55:36.372509   56497 out.go:203] 
	W1119 21:55:36.373805   56497 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1119 21:55:36.375038   56497 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
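
Reproducing the localized output: minikube picks its message catalog from the standard locale environment variables, so the French text above can be obtained with the same dry-run invocation the test issues. A minimal sketch, assuming a French locale such as fr_FR.UTF-8 is installed and is what the test injects:

    # force a French locale for one dry-run invocation (the exact locale name is an assumption);
    # the expected result is the localized RSRC_INSUFFICIENT_REQ_MEMORY exit seen above
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-142762 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd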

TestFunctional/parallel/StatusCmd (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

TestFunctional/parallel/ServiceCmdConnect (19.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-142762 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-142762 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-nlrvx" [626e24aa-d28c-479a-b273-fc5345e87183] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-nlrvx" [626e24aa-d28c-479a-b273-fc5345e87183] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 19.003133175s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30991
functional_test.go:1680: http://192.168.49.2:30991: success! body:
Request served by hello-node-connect-7d85dfc575-nlrvx

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30991
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (19.69s)
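
The flow above (create a deployment, expose it as a NodePort, resolve the URL, hit it) can be replayed by hand. A minimal sketch, assuming the functional-142762 context is still active; note the NodePort (30991 in this run) is assigned per service and will generally differ:

    kubectl --context functional-142762 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-142762 expose deployment hello-node-connect --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-142762 service hello-node-connect --url   # e.g. http://192.168.49.2:30991
    curl -s http://192.168.49.2:30991/   # echo-server replies with the serving pod name and request headers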

TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 addons list -o json
I1119 21:55:23.306632   12821 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (31.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [4969fb10-4404-47dd-8fce-86bea94a5111] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003747908s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-142762 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-142762 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-142762 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-142762 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [29df821e-f83a-48e7-9c8b-ca4866f29c3c] Pending
helpers_test.go:352: "sp-pod" [29df821e-f83a-48e7-9c8b-ca4866f29c3c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1119 21:55:25.931323   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "sp-pod" [29df821e-f83a-48e7-9c8b-ca4866f29c3c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.004071515s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-142762 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-142762 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-142762 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [0e0b98ca-08a4-41dc-945a-edb43f4c5e1d] Pending
helpers_test.go:352: "sp-pod" [0e0b98ca-08a4-41dc-945a-edb43f4c5e1d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [0e0b98ca-08a4-41dc-945a-edb43f4c5e1d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004193521s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-142762 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (31.37s)
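
The persistence check above boils down to writing through the PVC-backed mount, deleting the pod, recreating it, and confirming the file is still there. A sketch of the same sequence, assuming the repo's testdata manifests and an active functional-142762 context:

    kubectl --context functional-142762 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-142762 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-142762 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-142762 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-142762 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-142762 exec sp-pod -- ls /tmp/mount   # "foo" should survive the pod recreation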

TestFunctional/parallel/SSHCmd (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)

TestFunctional/parallel/CpCmd (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh -n functional-142762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 cp functional-142762:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1484291741/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh -n functional-142762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh -n functional-142762 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.83s)

TestFunctional/parallel/MySQL (20.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-142762 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-szkn2" [1fb8af94-8596-4972-8483-74d6ae27b80d] Pending
helpers_test.go:352: "mysql-5bb876957f-szkn2" [1fb8af94-8596-4972-8483-74d6ae27b80d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-szkn2" [1fb8af94-8596-4972-8483-74d6ae27b80d] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.003452367s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-142762 exec mysql-5bb876957f-szkn2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-142762 exec mysql-5bb876957f-szkn2 -- mysql -ppassword -e "show databases;": exit status 1 (120.492748ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1119 21:55:33.294310   12821 retry.go:31] will retry after 771.700604ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-142762 exec mysql-5bb876957f-szkn2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-142762 exec mysql-5bb876957f-szkn2 -- mysql -ppassword -e "show databases;": exit status 1 (102.904802ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1119 21:55:34.169403   12821 retry.go:31] will retry after 1.929559846s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-142762 exec mysql-5bb876957f-szkn2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.23s)
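
The two non-zero exits above are expected while mysqld is still initializing inside the pod; the harness simply retries (retry.go) until the server socket is up. A shell equivalent of that wait, assuming the per-run pod name shown in this log:

    # poll until mysqld accepts connections; the test does the same with backoff via retry.go
    until kubectl --context functional-142762 exec mysql-5bb876957f-szkn2 -- \
          mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
        sleep 2
    done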

TestFunctional/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/12821/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh "sudo cat /etc/test/nested/copy/12821/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

TestFunctional/parallel/CertSync (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/12821.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh "sudo cat /etc/ssl/certs/12821.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/12821.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh "sudo cat /usr/share/ca-certificates/12821.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/128212.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh "sudo cat /etc/ssl/certs/128212.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/128212.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh "sudo cat /usr/share/ca-certificates/128212.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.82s)
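
The test checks the synced certificate at both canonical locations plus the hash-named .0 entry. A quick way to confirm they hold the same certificate, sketched under the assumption that sha256sum is available in the node image:

    out/minikube-linux-amd64 -p functional-142762 ssh \
      "sudo sha256sum /etc/ssl/certs/12821.pem /usr/share/ca-certificates/12821.pem /etc/ssl/certs/51391683.0"
    # identical digests indicate the cert was synced consistently to all three paths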

TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-142762 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-142762 ssh "sudo systemctl is-active docker": exit status 1 (306.659337ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-142762 ssh "sudo systemctl is-active crio": exit status 1 (298.978437ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)
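
"inactive" together with ssh status 3 is the expected outcome here: systemctl is-active exits non-zero (3) for a unit that is not active, and containerd is the configured runtime, so docker and crio should both be down; minikube surfaces that as its own exit status 1. The same check, run directly:

    out/minikube-linux-amd64 -p functional-142762 ssh "sudo systemctl is-active docker"   # prints "inactive", exits non-zero
    out/minikube-linux-amd64 -p functional-142762 ssh "sudo systemctl is-active crio"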

TestFunctional/parallel/License (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.48s)

TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-142762 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-142762
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-142762
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-142762 image ls --format short --alsologtostderr:
I1119 21:55:47.897111   61185 out.go:360] Setting OutFile to fd 1 ...
I1119 21:55:47.897342   61185 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:55:47.897350   61185 out.go:374] Setting ErrFile to fd 2...
I1119 21:55:47.897354   61185 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:55:47.897530   61185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
I1119 21:55:47.898096   61185 config.go:182] Loaded profile config "functional-142762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 21:55:47.898191   61185 config.go:182] Loaded profile config "functional-142762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 21:55:47.898554   61185 cli_runner.go:164] Run: docker container inspect functional-142762 --format={{.State.Status}}
I1119 21:55:47.922487   61185 ssh_runner.go:195] Run: systemctl --version
I1119 21:55:47.922542   61185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-142762
I1119 21:55:47.944449   61185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/functional-142762/id_rsa Username:docker}
I1119 21:55:48.046683   61185 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
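
The listing is produced by querying containerd over CRI on the node (the "sudo crictl images --output json" call visible in the stderr trace). The raw data can be inspected directly if needed:

    out/minikube-linux-amd64 -p functional-142762 ssh "sudo crictl images --output json"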

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-142762 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:c80c8d │ 22.8MB │
│ docker.io/library/minikube-local-cache-test │ functional-142762  │ sha256:c2e0f8 │ 993B   │
│ docker.io/library/mysql                     │ 5.7                │ sha256:510733 │ 138MB  │
│ docker.io/library/nginx                     │ alpine             │ sha256:d4918c │ 22.6MB │
│ docker.io/library/nginx                     │ latest             │ sha256:60adc2 │ 59.8MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ docker.io/kicbase/echo-server               │ functional-142762  │ sha256:9056ab │ 2.37MB │
│ docker.io/kicbase/echo-server               │ latest             │ sha256:9056ab │ 2.37MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:fc2517 │ 26MB   │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:c3994b │ 27.1MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:7dd6aa │ 17.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-142762 image ls --format table --alsologtostderr:
I1119 21:55:48.299088   61398 out.go:360] Setting OutFile to fd 1 ...
I1119 21:55:48.299326   61398 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:55:48.299333   61398 out.go:374] Setting ErrFile to fd 2...
I1119 21:55:48.299337   61398 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:55:48.299526   61398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
I1119 21:55:48.300079   61398 config.go:182] Loaded profile config "functional-142762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 21:55:48.300178   61398 config.go:182] Loaded profile config "functional-142762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 21:55:48.300545   61398 cli_runner.go:164] Run: docker container inspect functional-142762 --format={{.State.Status}}
I1119 21:55:48.323006   61398 ssh_runner.go:195] Run: systemctl --version
I1119 21:55:48.323074   61398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-142762
I1119 21:55:48.347635   61398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/functional-142762/id_rsa Username:docker}
I1119 21:55:48.441975   61398 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-142762 image ls --format json --alsologtostderr:
[{"id":"sha256:c2e0f8132170131bb936f8ead67ab319f0c126c15c79f6980c11162ee9a6e58c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-142762"],"size":"993"},{"id":"sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"59772801"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:fc25172553d791
97ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"25963718"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-142762","docker.io/kicbase/echo-server:latest"],"size":"2372971"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library
/nginx:alpine"],"size":"22631814"},{"id":"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"22820214"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"173
85568"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020
289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"27061991"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-142762 image ls --format json --alsologtostderr:
I1119 21:55:48.155243   61340 out.go:360] Setting OutFile to fd 1 ...
I1119 21:55:48.155377   61340 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:55:48.155389   61340 out.go:374] Setting ErrFile to fd 2...
I1119 21:55:48.155397   61340 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:55:48.155738   61340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
I1119 21:55:48.156555   61340 config.go:182] Loaded profile config "functional-142762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 21:55:48.156670   61340 config.go:182] Loaded profile config "functional-142762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 21:55:48.157291   61340 cli_runner.go:164] Run: docker container inspect functional-142762 --format={{.State.Status}}
I1119 21:55:48.179645   61340 ssh_runner.go:195] Run: systemctl --version
I1119 21:55:48.179690   61340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-142762
I1119 21:55:48.202835   61340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/functional-142762/id_rsa Username:docker}
I1119 21:55:48.304626   61340 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-142762 image ls --format yaml --alsologtostderr:
- id: sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "59772801"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "22820214"
- id: sha256:c2e0f8132170131bb936f8ead67ab319f0c126c15c79f6980c11162ee9a6e58c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-142762
size: "993"
- id: sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "27061991"
- id: sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "25963718"
- id: sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "17385568"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-142762
- docker.io/kicbase/echo-server:latest
size: "2372971"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "22631814"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-142762 image ls --format yaml --alsologtostderr:
I1119 21:55:48.398521   61484 out.go:360] Setting OutFile to fd 1 ...
I1119 21:55:48.398631   61484 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:55:48.398640   61484 out.go:374] Setting ErrFile to fd 2...
I1119 21:55:48.398644   61484 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:55:48.398830   61484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
I1119 21:55:48.399365   61484 config.go:182] Loaded profile config "functional-142762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 21:55:48.399470   61484 config.go:182] Loaded profile config "functional-142762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 21:55:48.399820   61484 cli_runner.go:164] Run: docker container inspect functional-142762 --format={{.State.Status}}
I1119 21:55:48.418453   61484 ssh_runner.go:195] Run: systemctl --version
I1119 21:55:48.418512   61484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-142762
I1119 21:55:48.435835   61484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/functional-142762/id_rsa Username:docker}
I1119 21:55:48.530314   61484 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-142762 ssh pgrep buildkitd: exit status 1 (277.636701ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 image build -t localhost/my-image:functional-142762 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-142762 image build -t localhost/my-image:functional-142762 testdata/build --alsologtostderr: (3.211305355s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-142762 image build -t localhost/my-image:functional-142762 testdata/build --alsologtostderr:
I1119 21:55:48.809795   61663 out.go:360] Setting OutFile to fd 1 ...
I1119 21:55:48.809996   61663 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:55:48.810009   61663 out.go:374] Setting ErrFile to fd 2...
I1119 21:55:48.810016   61663 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:55:48.810270   61663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
I1119 21:55:48.810914   61663 config.go:182] Loaded profile config "functional-142762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 21:55:48.812199   61663 config.go:182] Loaded profile config "functional-142762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 21:55:48.812681   61663 cli_runner.go:164] Run: docker container inspect functional-142762 --format={{.State.Status}}
I1119 21:55:48.831550   61663 ssh_runner.go:195] Run: systemctl --version
I1119 21:55:48.831596   61663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-142762
I1119 21:55:48.851748   61663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/functional-142762/id_rsa Username:docker}
I1119 21:55:48.944491   61663 build_images.go:162] Building image from path: /tmp/build.2356999514.tar
I1119 21:55:48.944574   61663 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1119 21:55:48.952498   61663 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2356999514.tar
I1119 21:55:48.956362   61663 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2356999514.tar: stat -c "%s %y" /var/lib/minikube/build/build.2356999514.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2356999514.tar': No such file or directory
I1119 21:55:48.956406   61663 ssh_runner.go:362] scp /tmp/build.2356999514.tar --> /var/lib/minikube/build/build.2356999514.tar (3072 bytes)
I1119 21:55:48.974504   61663 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2356999514
I1119 21:55:48.982732   61663 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2356999514 -xf /var/lib/minikube/build/build.2356999514.tar
I1119 21:55:48.991265   61663 containerd.go:394] Building image: /var/lib/minikube/build/build.2356999514
I1119 21:55:48.991336   61663 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2356999514 --local dockerfile=/var/lib/minikube/build/build.2356999514 --output type=image,name=localhost/my-image:functional-142762
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.4s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:1f7e5efb1df6ab0b406f229ea9930816ea8447303d9a1f8e71f00c5b80ea9ad2 done
#8 exporting config sha256:a1aa0484441322848e2b8b79bf953eb017bc226f23fa3c942c23d1fcdcc53cee done
#8 naming to localhost/my-image:functional-142762 done
#8 DONE 0.1s
I1119 21:55:51.937999   61663 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2356999514 --local dockerfile=/var/lib/minikube/build/build.2356999514 --output type=image,name=localhost/my-image:functional-142762: (2.946595178s)
I1119 21:55:51.938079   61663 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2356999514
I1119 21:55:51.947538   61663 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2356999514.tar
I1119 21:55:51.956864   61663 build_images.go:218] Built localhost/my-image:functional-142762 from /tmp/build.2356999514.tar
I1119 21:55:51.956927   61663 build_images.go:134] succeeded building to: functional-142762
I1119 21:55:51.956933   61663 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 image ls
2025/11/19 21:55:55 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.74s)
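Note: the build log above shows the full flow minikube uses for "image build" on the containerd runtime: the local context is tarred, copied into the node over SSH, unpacked under /var/lib/minikube/build, and built with buildctl. The following is a minimal Go sketch of driving that same command from a harness (assumes the out/minikube-linux-amd64 binary and the functional-142762 profile from this log; this is not the project's own helper code):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Same invocation as functional_test.go:330 above; combined output
	// includes the buildctl step-by-step progress shown in the log.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-142762",
		"image", "build", "-t", "localhost/my-image:functional-142762",
		"testdata/build", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("image build failed: %v\n%s", err, out)
	}
	log.Printf("built image:\n%s", out)
}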

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.704548525s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-142762
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.73s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 image load --daemon kicbase/echo-server:functional-142762 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-142762 image load --daemon kicbase/echo-server:functional-142762 --alsologtostderr: (1.053849498s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.28s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "414.173764ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "67.830625ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-142762 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-142762 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-142762 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 54407: os: process already finished
helpers_test.go:519: unable to terminate pid 54119: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-142762 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "389.254229ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "76.776359ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-142762 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-142762 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [4f31f361-d962-46f9-a575-29f7625d447a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [4f31f361-d962-46f9-a575-29f7625d447a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 18.003789957s
I1119 21:55:35.834875   12821 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.26s)
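Note: the 18s wait above is a label-selector poll: the test applies testdata/testsvc.yaml and then watches pods matching run=nginx-svc until one reports Running, with a 4m0s ceiling. A rough equivalent of that polling loop (an illustrative sketch, not the helpers_test.go implementation; kubectl on PATH and the functional-142762 context are assumed):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s wait in the log
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "functional-142762",
			"get", "pods", "-l", "run=nginx-svc",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			fmt.Println("run=nginx-svc pod is Running")
			return
		}
		time.Sleep(2 * time.Second) // back off between probes
	}
	log.Fatal("timed out waiting for run=nginx-svc")
}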

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 image load --daemon kicbase/echo-server:functional-142762 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-142762
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 image load --daemon kicbase/echo-server:functional-142762 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.02s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 image save kicbase/echo-server:functional-142762 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 image rm kicbase/echo-server:functional-142762 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-142762
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 image save --daemon kicbase/echo-server:functional-142762 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-142762
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)
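Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together exercise a save/load round trip for the cached image. A compact sketch of that round trip using the same subcommands seen in the log (the tar path here is illustrative; minikube and docker binaries are assumed on PATH):

package main

import (
	"log"
	"os/exec"
)

// run aborts on the first failing command and prints its output.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	const img = "kicbase/echo-server:functional-142762"
	const tar = "/tmp/echo-server-save.tar" // illustrative path

	run("out/minikube-linux-amd64", "-p", "functional-142762", "image", "save", img, tar)
	run("out/minikube-linux-amd64", "-p", "functional-142762", "image", "rm", img)
	run("out/minikube-linux-amd64", "-p", "functional-142762", "image", "load", tar)
	run("out/minikube-linux-amd64", "-p", "functional-142762", "image", "save", "--daemon", img)
	run("docker", "image", "inspect", img) // verify the host daemon now has the image back
}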

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-142762 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.26.27 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-142762 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
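Note: the serial TunnelCmd tests above follow one lifecycle: start "minikube tunnel" as a background daemon, wait for nginx-svc to receive a LoadBalancer ingress IP, hit it directly, then tear the tunnel down. A simplified sketch of that lifecycle (the fixed sleep stands in for the tests' polling; binaries and the functional-142762 profile are assumed):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	// StartTunnel: run the tunnel as a background process.
	tunnel := exec.Command("out/minikube-linux-amd64", "-p", "functional-142762",
		"tunnel", "--alsologtostderr")
	if err := tunnel.Start(); err != nil {
		log.Fatal(err)
	}
	defer tunnel.Process.Kill() // DeleteTunnel: terminate the background daemon

	time.Sleep(5 * time.Second) // crude wait; the real test polls the service instead

	// WaitService/IngressIP: read the LoadBalancer ingress IP assigned by the tunnel.
	ip, err := exec.Command("kubectl", "--context", "functional-142762",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("tunnel ingress IP: %s\n", ip)
}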

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-142762 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-142762 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-8c9f7" [bc8d2837-b8ed-491b-a639-5364cc365103] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-8c9f7" [bc8d2837-b8ed-491b-a639-5364cc365103] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003226771s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.13s)
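Note: DeployApp creates a deployment from the kicbase/echo-server image and exposes it as a NodePort service before the later ServiceCmd subtests query it. The two kubectl calls, sketched (names copied from the log; the context is assumed to exist):

package main

import (
	"log"
	"os/exec"
)

func kubectl(args ...string) {
	args = append([]string{"--context", "functional-142762"}, args...)
	if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	kubectl("create", "deployment", "hello-node", "--image", "kicbase/echo-server")
	kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
	// The test then waits for pods with label app=hello-node to become healthy.
}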

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-142762 /tmp/TestFunctionalparallelMountCmdany-port877637866/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763589336381436493" to /tmp/TestFunctionalparallelMountCmdany-port877637866/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763589336381436493" to /tmp/TestFunctionalparallelMountCmdany-port877637866/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763589336381436493" to /tmp/TestFunctionalparallelMountCmdany-port877637866/001/test-1763589336381436493
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-142762 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (295.736124ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1119 21:55:36.677492   12821 retry.go:31] will retry after 443.876632ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 19 21:55 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 19 21:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 19 21:55 test-1763589336381436493
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh cat /mount-9p/test-1763589336381436493
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-142762 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [67d51ecf-b5ec-42ef-bd71-325752ef2611] Pending
helpers_test.go:352: "busybox-mount" [67d51ecf-b5ec-42ef-bd71-325752ef2611] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [67d51ecf-b5ec-42ef-bd71-325752ef2611] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
I1119 21:55:42.119866   12821 detect.go:223] nested VM detected
helpers_test.go:352: "busybox-mount" [67d51ecf-b5ec-42ef-bd71-325752ef2611] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003263072s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-142762 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-142762 /tmp/TestFunctionalparallelMountCmdany-port877637866/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.76s)
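Note: the mount test starts a background "minikube mount" daemon, and the first "findmnt -T /mount-9p" probe can race the 9p mount becoming visible, which is why the log shows a non-zero exit followed by retry.go backing off for ~444ms and then succeeding. A sketch of that probe-with-retry pattern (illustrative only, not the retry.go implementation):

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	// Probe the 9p mount inside the node, backing off between attempts.
	delay := 300 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-142762",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if out, err := cmd.CombinedOutput(); err == nil {
			log.Printf("mount visible:\n%s", out)
			return
		}
		log.Printf("attempt %d: mount not visible yet, retrying in %v", attempt, delay)
		time.Sleep(delay)
		delay *= 2
	}
	log.Fatal("9p mount never became visible")
}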

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.96s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 service list -o json
functional_test.go:1504: Took "987.536996ms" to run "out/minikube-linux-amd64 -p functional-142762 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.99s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-142762 /tmp/TestFunctionalparallelMountCmdspecific-port1043741615/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-142762 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (353.9213ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1119 21:55:44.493270   12821 retry.go:31] will retry after 669.368448ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-142762 /tmp/TestFunctionalparallelMountCmdspecific-port1043741615/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-142762 ssh "sudo umount -f /mount-9p": exit status 1 (291.390688ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-142762 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-142762 /tmp/TestFunctionalparallelMountCmdspecific-port1043741615/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.17s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31512
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31512
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.60s)
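Note: HTTPS, Format and URL all resolve the NodePort endpoint for hello-node; this run landed on http://192.168.49.2:31512. A sketch that fetches the URL with "minikube service --url" and issues a plain HTTP request against it (the endpoint differs per run, and the command is assumed to print just the URL on stdout):

package main

import (
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-142762",
		"service", "hello-node", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(out))

	resp, err := http.Get(url) // echo-server answers plain HTTP on the NodePort
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	log.Printf("GET %s -> %s\n%s", url, resp.Status, body)
}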

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-142762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2602629887/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-142762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2602629887/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-142762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2602629887/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-142762 ssh "findmnt -T" /mount1: exit status 1 (372.48136ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1119 21:55:46.679389   12821 retry.go:31] will retry after 574.050414ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-142762 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-142762 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-142762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2602629887/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-142762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2602629887/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-142762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2602629887/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.93s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-142762
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-142762
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-142762
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (140.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1119 21:56:06.893033   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:57:28.815384   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-901994 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m20.2574954s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (140.98s)
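Note: StartCluster brings up a multi-control-plane cluster in one invocation via the --ha flag, then checks every node with "status". The two commands, as a sketch (flags copied from the log; the minikube binary is assumed):

package main

import (
	"log"
	"os/exec"
)

func main() {
	start := exec.Command("out/minikube-linux-amd64", "-p", "ha-901994", "start",
		"--ha", "--memory", "3072", "--wait", "true",
		"--alsologtostderr", "-v", "5",
		"--driver=docker", "--container-runtime=containerd")
	if out, err := start.CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}

	status := exec.Command("out/minikube-linux-amd64", "-p", "ha-901994",
		"status", "--alsologtostderr", "-v", "5")
	out, _ := status.CombinedOutput() // status exits non-zero if any node is down
	log.Printf("%s", out)
}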

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-901994 kubectl -- rollout status deployment/busybox: (3.289639673s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 kubectl -- exec busybox-7b57f96db7-fpklf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 kubectl -- exec busybox-7b57f96db7-zbh4v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 kubectl -- exec busybox-7b57f96db7-zd4kd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 kubectl -- exec busybox-7b57f96db7-fpklf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 kubectl -- exec busybox-7b57f96db7-zbh4v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 kubectl -- exec busybox-7b57f96db7-zd4kd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 kubectl -- exec busybox-7b57f96db7-fpklf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 kubectl -- exec busybox-7b57f96db7-zbh4v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 kubectl -- exec busybox-7b57f96db7-zd4kd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.35s)
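Note: the HA DeployApp step applies the busybox DNS manifest, waits for the rollout, then runs nslookup for kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local inside every replica so DNS is exercised from each node. A condensed sketch of that loop (pod names are discovered at run time here, unlike the fixed names printed in the log, and no filtering of unrelated pods is done):

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	mk := func(args ...string) []byte {
		out, err := exec.Command("out/minikube-linux-amd64",
			append([]string{"-p", "ha-901994", "kubectl", "--"}, args...)...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl %v: %v\n%s", args, err, out)
		}
		return out
	}

	mk("apply", "-f", "./testdata/ha/ha-pod-dns-test.yaml")
	mk("rollout", "status", "deployment/busybox")

	pods := strings.Fields(string(mk("get", "pods", "-o", "jsonpath={.items[*].metadata.name}")))
	for _, pod := range pods {
		for _, name := range []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"} {
			mk("exec", pod, "--", "nslookup", name) // each replica must resolve all three names
		}
	}
}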

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 kubectl -- exec busybox-7b57f96db7-fpklf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 kubectl -- exec busybox-7b57f96db7-fpklf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 kubectl -- exec busybox-7b57f96db7-zbh4v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 kubectl -- exec busybox-7b57f96db7-zbh4v -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 kubectl -- exec busybox-7b57f96db7-zd4kd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 kubectl -- exec busybox-7b57f96db7-zd4kd -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.17s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-901994 node add --alsologtostderr -v 5: (23.647490485s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.53s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-901994 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 cp testdata/cp-test.txt ha-901994:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 cp ha-901994:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2794060133/001/cp-test_ha-901994.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 cp ha-901994:/home/docker/cp-test.txt ha-901994-m02:/home/docker/cp-test_ha-901994_ha-901994-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m02 "sudo cat /home/docker/cp-test_ha-901994_ha-901994-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 cp ha-901994:/home/docker/cp-test.txt ha-901994-m03:/home/docker/cp-test_ha-901994_ha-901994-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m03 "sudo cat /home/docker/cp-test_ha-901994_ha-901994-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 cp ha-901994:/home/docker/cp-test.txt ha-901994-m04:/home/docker/cp-test_ha-901994_ha-901994-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m04 "sudo cat /home/docker/cp-test_ha-901994_ha-901994-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 cp testdata/cp-test.txt ha-901994-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 cp ha-901994-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2794060133/001/cp-test_ha-901994-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 cp ha-901994-m02:/home/docker/cp-test.txt ha-901994:/home/docker/cp-test_ha-901994-m02_ha-901994.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994 "sudo cat /home/docker/cp-test_ha-901994-m02_ha-901994.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 cp ha-901994-m02:/home/docker/cp-test.txt ha-901994-m03:/home/docker/cp-test_ha-901994-m02_ha-901994-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m03 "sudo cat /home/docker/cp-test_ha-901994-m02_ha-901994-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 cp ha-901994-m02:/home/docker/cp-test.txt ha-901994-m04:/home/docker/cp-test_ha-901994-m02_ha-901994-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m04 "sudo cat /home/docker/cp-test_ha-901994-m02_ha-901994-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 cp testdata/cp-test.txt ha-901994-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 cp ha-901994-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2794060133/001/cp-test_ha-901994-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 cp ha-901994-m03:/home/docker/cp-test.txt ha-901994:/home/docker/cp-test_ha-901994-m03_ha-901994.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994 "sudo cat /home/docker/cp-test_ha-901994-m03_ha-901994.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 cp ha-901994-m03:/home/docker/cp-test.txt ha-901994-m02:/home/docker/cp-test_ha-901994-m03_ha-901994-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m02 "sudo cat /home/docker/cp-test_ha-901994-m03_ha-901994-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 cp ha-901994-m03:/home/docker/cp-test.txt ha-901994-m04:/home/docker/cp-test_ha-901994-m03_ha-901994-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m04 "sudo cat /home/docker/cp-test_ha-901994-m03_ha-901994-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 cp testdata/cp-test.txt ha-901994-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 cp ha-901994-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2794060133/001/cp-test_ha-901994-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 cp ha-901994-m04:/home/docker/cp-test.txt ha-901994:/home/docker/cp-test_ha-901994-m04_ha-901994.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994 "sudo cat /home/docker/cp-test_ha-901994-m04_ha-901994.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 cp ha-901994-m04:/home/docker/cp-test.txt ha-901994-m02:/home/docker/cp-test_ha-901994-m04_ha-901994-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m02 "sudo cat /home/docker/cp-test_ha-901994-m04_ha-901994-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 cp ha-901994-m04:/home/docker/cp-test.txt ha-901994-m03:/home/docker/cp-test_ha-901994-m04_ha-901994-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 ssh -n ha-901994-m03 "sudo cat /home/docker/cp-test_ha-901994-m04_ha-901994-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.90s)
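Note: CopyFile pushes testdata/cp-test.txt to every node, copies it back to the host and between nodes, and confirms each copy with "ssh -n <node> sudo cat". A reduced sketch of the host-to-node-and-verify step across all four nodes (node names taken from the log; the node-to-node permutations are omitted):

package main

import (
	"log"
	"os/exec"
)

func main() {
	nodes := []string{"ha-901994", "ha-901994-m02", "ha-901994-m03", "ha-901994-m04"}
	for _, node := range nodes {
		// Copy the fixture into the node...
		cp := exec.Command("out/minikube-linux-amd64", "-p", "ha-901994",
			"cp", "testdata/cp-test.txt", node+":/home/docker/cp-test.txt")
		if out, err := cp.CombinedOutput(); err != nil {
			log.Fatalf("cp to %s: %v\n%s", node, err, out)
		}
		// ...then read it back over SSH to confirm it arrived intact.
		cat := exec.Command("out/minikube-linux-amd64", "-p", "ha-901994",
			"ssh", "-n", node, "sudo cat /home/docker/cp-test.txt")
		if out, err := cat.CombinedOutput(); err != nil {
			log.Fatalf("verify on %s: %v\n%s", node, err, out)
		}
	}
	log.Println("cp-test.txt verified on all nodes")
}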

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-901994 node stop m02 --alsologtostderr -v 5: (12.04697204s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-901994 status --alsologtostderr -v 5: exit status 7 (700.229987ms)

                                                
                                                
-- stdout --
	ha-901994
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-901994-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-901994-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-901994-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 21:59:20.650460   82820 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:59:20.650764   82820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:59:20.650776   82820 out.go:374] Setting ErrFile to fd 2...
	I1119 21:59:20.650783   82820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:59:20.650997   82820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
	I1119 21:59:20.651171   82820 out.go:368] Setting JSON to false
	I1119 21:59:20.651215   82820 mustload.go:66] Loading cluster: ha-901994
	I1119 21:59:20.651339   82820 notify.go:221] Checking for updates...
	I1119 21:59:20.651659   82820 config.go:182] Loaded profile config "ha-901994": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 21:59:20.651676   82820 status.go:174] checking status of ha-901994 ...
	I1119 21:59:20.652276   82820 cli_runner.go:164] Run: docker container inspect ha-901994 --format={{.State.Status}}
	I1119 21:59:20.672916   82820 status.go:371] ha-901994 host status = "Running" (err=<nil>)
	I1119 21:59:20.672948   82820 host.go:66] Checking if "ha-901994" exists ...
	I1119 21:59:20.673227   82820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-901994
	I1119 21:59:20.692841   82820 host.go:66] Checking if "ha-901994" exists ...
	I1119 21:59:20.693119   82820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 21:59:20.693166   82820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-901994
	I1119 21:59:20.710369   82820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/ha-901994/id_rsa Username:docker}
	I1119 21:59:20.803453   82820 ssh_runner.go:195] Run: systemctl --version
	I1119 21:59:20.810342   82820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 21:59:20.822937   82820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:59:20.884740   82820 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 21:59:20.872805139 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 21:59:20.885457   82820 kubeconfig.go:125] found "ha-901994" server: "https://192.168.49.254:8443"
	I1119 21:59:20.885484   82820 api_server.go:166] Checking apiserver status ...
	I1119 21:59:20.885517   82820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:20.897754   82820 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1363/cgroup
	W1119 21:59:20.906506   82820 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1363/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1119 21:59:20.906554   82820 ssh_runner.go:195] Run: ls
	I1119 21:59:20.910571   82820 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1119 21:59:20.915271   82820 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1119 21:59:20.915300   82820 status.go:463] ha-901994 apiserver status = Running (err=<nil>)
	I1119 21:59:20.915310   82820 status.go:176] ha-901994 status: &{Name:ha-901994 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 21:59:20.915324   82820 status.go:174] checking status of ha-901994-m02 ...
	I1119 21:59:20.915555   82820 cli_runner.go:164] Run: docker container inspect ha-901994-m02 --format={{.State.Status}}
	I1119 21:59:20.935811   82820 status.go:371] ha-901994-m02 host status = "Stopped" (err=<nil>)
	I1119 21:59:20.935833   82820 status.go:384] host is not running, skipping remaining checks
	I1119 21:59:20.935839   82820 status.go:176] ha-901994-m02 status: &{Name:ha-901994-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 21:59:20.935866   82820 status.go:174] checking status of ha-901994-m03 ...
	I1119 21:59:20.936205   82820 cli_runner.go:164] Run: docker container inspect ha-901994-m03 --format={{.State.Status}}
	I1119 21:59:20.955747   82820 status.go:371] ha-901994-m03 host status = "Running" (err=<nil>)
	I1119 21:59:20.955782   82820 host.go:66] Checking if "ha-901994-m03" exists ...
	I1119 21:59:20.956063   82820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-901994-m03
	I1119 21:59:20.974194   82820 host.go:66] Checking if "ha-901994-m03" exists ...
	I1119 21:59:20.974452   82820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 21:59:20.974487   82820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-901994-m03
	I1119 21:59:20.993347   82820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/ha-901994-m03/id_rsa Username:docker}
	I1119 21:59:21.085410   82820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 21:59:21.098412   82820 kubeconfig.go:125] found "ha-901994" server: "https://192.168.49.254:8443"
	I1119 21:59:21.098438   82820 api_server.go:166] Checking apiserver status ...
	I1119 21:59:21.098474   82820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:21.110723   82820 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1303/cgroup
	W1119 21:59:21.119819   82820 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1303/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1119 21:59:21.119903   82820 ssh_runner.go:195] Run: ls
	I1119 21:59:21.124081   82820 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1119 21:59:21.128588   82820 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1119 21:59:21.128613   82820 status.go:463] ha-901994-m03 apiserver status = Running (err=<nil>)
	I1119 21:59:21.128624   82820 status.go:176] ha-901994-m03 status: &{Name:ha-901994-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 21:59:21.128642   82820 status.go:174] checking status of ha-901994-m04 ...
	I1119 21:59:21.129042   82820 cli_runner.go:164] Run: docker container inspect ha-901994-m04 --format={{.State.Status}}
	I1119 21:59:21.148907   82820 status.go:371] ha-901994-m04 host status = "Running" (err=<nil>)
	I1119 21:59:21.148936   82820 host.go:66] Checking if "ha-901994-m04" exists ...
	I1119 21:59:21.149188   82820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-901994-m04
	I1119 21:59:21.167648   82820 host.go:66] Checking if "ha-901994-m04" exists ...
	I1119 21:59:21.168005   82820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 21:59:21.168053   82820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-901994-m04
	I1119 21:59:21.187032   82820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/ha-901994-m04/id_rsa Username:docker}
	I1119 21:59:21.277069   82820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 21:59:21.289342   82820 status.go:176] ha-901994-m04 status: &{Name:ha-901994-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.75s)
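Note: the stderr above shows the three per-node probes behind the status command: a docker inspect for the container state, an SSH check that the kubelet unit is active, and an HTTP GET against the apiserver /healthz endpoint. A minimal sketch for rerunning those probes by hand, using the node names and the 192.168.49.254:8443 endpoint from this run (the curl call and the ssh --node form are illustrative assumptions, not part of the test):
	# container state, as in status.go:371
	docker container inspect ha-901994-m02 --format '{{.State.Status}}'
	# kubelet liveness on a running node, as checked via ssh_runner.go:195
	out/minikube-linux-amd64 -p ha-901994 ssh --node ha-901994-m03 -- sudo systemctl is-active kubelet
	# apiserver health, as in api_server.go:253 (skip TLS verification against the minikube cert)
	curl -ks https://192.168.49.254:8443/healthz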

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (8.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-901994 node start m02 --alsologtostderr -v 5: (7.77423696s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (93.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 stop --alsologtostderr -v 5
E1119 21:59:44.954064   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-901994 stop --alsologtostderr -v 5: (37.250166256s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 start --wait true --alsologtostderr -v 5
E1119 22:00:12.657362   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:00:16.170413   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:00:16.176840   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:00:16.188274   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:00:16.209697   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:00:16.251536   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:00:16.333421   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:00:16.494839   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:00:16.816579   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:00:17.458098   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:00:18.739713   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:00:21.301202   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:00:26.423121   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:00:36.665144   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:00:57.147443   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-901994 start --wait true --alsologtostderr -v 5: (56.316695528s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (93.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (9.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-901994 node delete m03 --alsologtostderr -v 5: (8.642419985s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.49s)
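The go-template passed at ha_test.go:521 prints the Ready condition status for every remaining node. A roughly equivalent jsonpath query, shown only as an illustration of what the template checks (not what the test runs):
	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'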

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 stop --alsologtostderr -v 5
E1119 22:01:38.110057   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-901994 stop --alsologtostderr -v 5: (36.13242769s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-901994 status --alsologtostderr -v 5: exit status 7 (119.726135ms)

                                                
                                                
-- stdout --
	ha-901994
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-901994-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-901994-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:01:51.665325   99317 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:01:51.665449   99317 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:01:51.665457   99317 out.go:374] Setting ErrFile to fd 2...
	I1119 22:01:51.665463   99317 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:01:51.665662   99317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
	I1119 22:01:51.665825   99317 out.go:368] Setting JSON to false
	I1119 22:01:51.665855   99317 mustload.go:66] Loading cluster: ha-901994
	I1119 22:01:51.665931   99317 notify.go:221] Checking for updates...
	I1119 22:01:51.666708   99317 config.go:182] Loaded profile config "ha-901994": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:01:51.666737   99317 status.go:174] checking status of ha-901994 ...
	I1119 22:01:51.667869   99317 cli_runner.go:164] Run: docker container inspect ha-901994 --format={{.State.Status}}
	I1119 22:01:51.689905   99317 status.go:371] ha-901994 host status = "Stopped" (err=<nil>)
	I1119 22:01:51.689944   99317 status.go:384] host is not running, skipping remaining checks
	I1119 22:01:51.689952   99317 status.go:176] ha-901994 status: &{Name:ha-901994 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:01:51.689984   99317 status.go:174] checking status of ha-901994-m02 ...
	I1119 22:01:51.690351   99317 cli_runner.go:164] Run: docker container inspect ha-901994-m02 --format={{.State.Status}}
	I1119 22:01:51.708605   99317 status.go:371] ha-901994-m02 host status = "Stopped" (err=<nil>)
	I1119 22:01:51.708629   99317 status.go:384] host is not running, skipping remaining checks
	I1119 22:01:51.708635   99317 status.go:176] ha-901994-m02 status: &{Name:ha-901994-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:01:51.708654   99317 status.go:174] checking status of ha-901994-m04 ...
	I1119 22:01:51.708939   99317 cli_runner.go:164] Run: docker container inspect ha-901994-m04 --format={{.State.Status}}
	I1119 22:01:51.727165   99317 status.go:371] ha-901994-m04 host status = "Stopped" (err=<nil>)
	I1119 22:01:51.727210   99317 status.go:384] host is not running, skipping remaining checks
	I1119 22:01:51.727222   99317 status.go:176] ha-901994-m04 status: &{Name:ha-901994-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.25s)
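With every host stopped, the status command exits non-zero (exit status 7 above) while still printing per-node detail. For scripting the same check, the status can also be emitted as JSON; a minimal sketch, assuming the multi-node output is a JSON list whose field names match the status struct dumped in the stderr above and that jq is available:
	out/minikube-linux-amd64 -p ha-901994 status --output json | jq -r '.[] | "\(.Name): host=\(.Host) kubelet=\(.Kubelet)"'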

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (57.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-901994 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (57.106974387s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (57.91s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (41.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 node add --control-plane --alsologtostderr -v 5
E1119 22:03:00.032191   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-901994 node add --control-plane --alsologtostderr -v 5: (40.955312054s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-901994 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (41.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

                                                
                                    
x
+
TestJSONOutput/start/Command (39.22s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-132179 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-132179 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (39.223895597s)
--- PASS: TestJSONOutput/start/Command (39.22s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.76s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-132179 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-132179 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.87s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-132179 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-132179 --output=json --user=testUser: (5.866297674s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-372389 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-372389 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (75.969895ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9c8efb44-1767-4578-a065-e10604299d94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-372389] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5e39d19e-3e14-464c-9740-5cd022131de7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21918"}}
	{"specversion":"1.0","id":"fb9a89cd-5330-49c6-a23c-b004bd7411e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a26cda82-25fb-478a-b3db-16dbab1183f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21918-9296/kubeconfig"}}
	{"specversion":"1.0","id":"b392cde5-02d1-4a67-b4ee-705ee5165d78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9296/.minikube"}}
	{"specversion":"1.0","id":"bac856d8-6264-43d4-8c81-7fcde9bb1e11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"487ae35c-1fd0-47fc-baaf-95e7dcb89843","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e380af54-2c1e-42c9-9442-9b2301ed14a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-372389" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-372389
--- PASS: TestErrorJSONOutput (0.23s)
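Each stdout line above is a CloudEvents envelope; the failure is the single event of type io.k8s.sigs.minikube.error. A minimal sketch for extracting that message from the JSON stream (the jq filter is an illustration; the event type string and field names are taken from the output above):
	out/minikube-linux-amd64 start -p json-output-error-372389 --memory=3072 --output=json --wait=true --driver=fail 2>/dev/null \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
	# prints: The driver 'fail' is not supported on linux/amd64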

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (44.82s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-091741 --network=
E1119 22:04:44.959558   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-091741 --network=: (42.6763154s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-091741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-091741
E1119 22:05:16.170400   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-091741: (2.121454566s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.82s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (22.56s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-827290 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-827290 --network=bridge: (20.507167353s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-827290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-827290
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-827290: (2.035562548s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.56s)

                                                
                                    
x
+
TestKicExistingNetwork (23.1s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1119 22:05:40.218521   12821 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1119 22:05:40.235882   12821 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1119 22:05:40.235986   12821 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1119 22:05:40.236007   12821 cli_runner.go:164] Run: docker network inspect existing-network
W1119 22:05:40.252949   12821 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1119 22:05:40.252979   12821 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1119 22:05:40.253004   12821 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1119 22:05:40.253120   12821 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1119 22:05:40.270626   12821 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-02d9279961e7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:f0:7b:99:dd:08} reservation:<nil>}
I1119 22:05:40.271103   12821 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021895a0}
I1119 22:05:40.271135   12821 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1119 22:05:40.271192   12821 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1119 22:05:40.319929   12821 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-958616 --network=existing-network
E1119 22:05:43.876284   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-958616 --network=existing-network: (20.953017538s)
helpers_test.go:175: Cleaning up "existing-network-958616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-958616
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-958616: (2.011148017s)
I1119 22:06:03.301946   12821 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.10s)
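The I-lines above show the setup the test performs itself: it probes for the named network, skips the 192.168.49.0/24 subnet already taken by another profile, and creates existing-network on the next free /24 before starting a cluster attached to it. The equivalent manual sequence, copied from the commands in this run (the subnet and gateway values are specific to this run):
	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
	out/minikube-linux-amd64 start -p existing-network-958616 --network=existing-network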

                                                
                                    
x
+
TestKicCustomSubnet (24.3s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-619508 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-619508 --subnet=192.168.60.0/24: (22.115408557s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-619508 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-619508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-619508
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-619508: (2.158282696s)
--- PASS: TestKicCustomSubnet (24.30s)

                                                
                                    
x
+
TestKicStaticIP (26.6s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-180291 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-180291 --static-ip=192.168.200.200: (24.311100148s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-180291 ip
helpers_test.go:175: Cleaning up "static-ip-180291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-180291
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-180291: (2.136896527s)
--- PASS: TestKicStaticIP (26.60s)

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (51.59s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-465141 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-465141 --driver=docker  --container-runtime=containerd: (24.966969345s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-468781 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-468781 --driver=docker  --container-runtime=containerd: (21.069624334s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-465141
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-468781
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-468781" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-468781
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-468781: (1.963857253s)
helpers_test.go:175: Cleaning up "first-465141" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-465141
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-465141: (2.35132325s)
--- PASS: TestMinikubeProfile (51.59s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (4.53s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-677362 --memory=3072 --mount-string /tmp/TestMountStartserial1529510988/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-677362 --memory=3072 --mount-string /tmp/TestMountStartserial1529510988/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.534382732s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.53s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-677362 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (4.52s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-695294 --memory=3072 --mount-string /tmp/TestMountStartserial1529510988/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-695294 --memory=3072 --mount-string /tmp/TestMountStartserial1529510988/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.514418738s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.52s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-695294 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-677362 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-677362 --alsologtostderr -v=5: (1.709607566s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-695294 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-695294
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-695294: (1.263644892s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.87s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-695294
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-695294: (6.864847897s)
--- PASS: TestMountStart/serial/RestartStopped (7.87s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-695294 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (62.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-841678 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-841678 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m2.382268091s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (62.87s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841678 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841678 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-841678 -- rollout status deployment/busybox: (3.519651141s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841678 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841678 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841678 -- exec busybox-7b57f96db7-jcklk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841678 -- exec busybox-7b57f96db7-n7f4b -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841678 -- exec busybox-7b57f96db7-jcklk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841678 -- exec busybox-7b57f96db7-n7f4b -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841678 -- exec busybox-7b57f96db7-jcklk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841678 -- exec busybox-7b57f96db7-n7f4b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.02s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841678 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841678 -- exec busybox-7b57f96db7-jcklk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841678 -- exec busybox-7b57f96db7-jcklk -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841678 -- exec busybox-7b57f96db7-n7f4b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841678 -- exec busybox-7b57f96db7-n7f4b -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)
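The pipeline at multinode_test.go:572 resolves host.minikube.internal inside each pod, keeps the fifth line of the busybox nslookup output, and takes its third space-separated field (the resolved address); the follow-up at :583 then pings that address (192.168.67.1 in this run, the host's address on the cluster network) from the pod. Reproducing it by hand against this run's pods (pod names are specific to this run):
	kubectl --context multinode-841678 exec busybox-7b57f96db7-jcklk -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	kubectl --context multinode-841678 exec busybox-7b57f96db7-jcklk -- sh -c "ping -c 1 192.168.67.1"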

                                                
                                    
x
+
TestMultiNode/serial/AddNode (23.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-841678 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-841678 -v=5 --alsologtostderr: (22.988161991s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.63s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-841678 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (9.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 cp testdata/cp-test.txt multinode-841678:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 ssh -n multinode-841678 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 cp multinode-841678:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile612462499/001/cp-test_multinode-841678.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 ssh -n multinode-841678 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 cp multinode-841678:/home/docker/cp-test.txt multinode-841678-m02:/home/docker/cp-test_multinode-841678_multinode-841678-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 ssh -n multinode-841678 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 ssh -n multinode-841678-m02 "sudo cat /home/docker/cp-test_multinode-841678_multinode-841678-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 cp multinode-841678:/home/docker/cp-test.txt multinode-841678-m03:/home/docker/cp-test_multinode-841678_multinode-841678-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 ssh -n multinode-841678 "sudo cat /home/docker/cp-test.txt"
E1119 22:09:44.954426   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 ssh -n multinode-841678-m03 "sudo cat /home/docker/cp-test_multinode-841678_multinode-841678-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 cp testdata/cp-test.txt multinode-841678-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 ssh -n multinode-841678-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 cp multinode-841678-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile612462499/001/cp-test_multinode-841678-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 ssh -n multinode-841678-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 cp multinode-841678-m02:/home/docker/cp-test.txt multinode-841678:/home/docker/cp-test_multinode-841678-m02_multinode-841678.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 ssh -n multinode-841678-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 ssh -n multinode-841678 "sudo cat /home/docker/cp-test_multinode-841678-m02_multinode-841678.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 cp multinode-841678-m02:/home/docker/cp-test.txt multinode-841678-m03:/home/docker/cp-test_multinode-841678-m02_multinode-841678-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 ssh -n multinode-841678-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 ssh -n multinode-841678-m03 "sudo cat /home/docker/cp-test_multinode-841678-m02_multinode-841678-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 cp testdata/cp-test.txt multinode-841678-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 ssh -n multinode-841678-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 cp multinode-841678-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile612462499/001/cp-test_multinode-841678-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 ssh -n multinode-841678-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 cp multinode-841678-m03:/home/docker/cp-test.txt multinode-841678:/home/docker/cp-test_multinode-841678-m03_multinode-841678.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 ssh -n multinode-841678-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 ssh -n multinode-841678 "sudo cat /home/docker/cp-test_multinode-841678-m03_multinode-841678.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 cp multinode-841678-m03:/home/docker/cp-test.txt multinode-841678-m02:/home/docker/cp-test_multinode-841678-m03_multinode-841678-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 ssh -n multinode-841678-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 ssh -n multinode-841678-m02 "sudo cat /home/docker/cp-test_multinode-841678-m03_multinode-841678-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.82s)
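
The copy matrix above exercises every direction `minikube cp` supports. A minimal sketch of the same paths, assuming the running multi-node profile from the log and illustrative destination file names:
	# host -> node
	minikube -p multinode-841678 cp testdata/cp-test.txt multinode-841678:/home/docker/cp-test.txt
	# node -> host
	minikube -p multinode-841678 cp multinode-841678:/home/docker/cp-test.txt /tmp/cp-test_multinode-841678.txt
	# node -> another node
	minikube -p multinode-841678 cp multinode-841678:/home/docker/cp-test.txt multinode-841678-m02:/home/docker/cp-test.txt
	# verify on the target node
	minikube -p multinode-841678 ssh -n multinode-841678-m02 "sudo cat /home/docker/cp-test.txt"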

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-841678 node stop m03: (1.26086639s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-841678 status: exit status 7 (510.942068ms)

                                                
                                                
-- stdout --
	multinode-841678
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-841678-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-841678-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-841678 status --alsologtostderr: exit status 7 (496.857319ms)

                                                
                                                
-- stdout --
	multinode-841678
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-841678-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-841678-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:09:53.385994  161375 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:09:53.386087  161375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:09:53.386094  161375 out.go:374] Setting ErrFile to fd 2...
	I1119 22:09:53.386099  161375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:09:53.386302  161375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
	I1119 22:09:53.386467  161375 out.go:368] Setting JSON to false
	I1119 22:09:53.386498  161375 mustload.go:66] Loading cluster: multinode-841678
	I1119 22:09:53.386610  161375 notify.go:221] Checking for updates...
	I1119 22:09:53.386841  161375 config.go:182] Loaded profile config "multinode-841678": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:09:53.386854  161375 status.go:174] checking status of multinode-841678 ...
	I1119 22:09:53.387286  161375 cli_runner.go:164] Run: docker container inspect multinode-841678 --format={{.State.Status}}
	I1119 22:09:53.408058  161375 status.go:371] multinode-841678 host status = "Running" (err=<nil>)
	I1119 22:09:53.408090  161375 host.go:66] Checking if "multinode-841678" exists ...
	I1119 22:09:53.408365  161375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-841678
	I1119 22:09:53.427542  161375 host.go:66] Checking if "multinode-841678" exists ...
	I1119 22:09:53.427803  161375 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:09:53.427846  161375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-841678
	I1119 22:09:53.446520  161375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/multinode-841678/id_rsa Username:docker}
	I1119 22:09:53.537470  161375 ssh_runner.go:195] Run: systemctl --version
	I1119 22:09:53.543728  161375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:09:53.556835  161375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:09:53.611629  161375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-19 22:09:53.602156837 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:09:53.612218  161375 kubeconfig.go:125] found "multinode-841678" server: "https://192.168.67.2:8443"
	I1119 22:09:53.612246  161375 api_server.go:166] Checking apiserver status ...
	I1119 22:09:53.612280  161375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:09:53.624379  161375 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1356/cgroup
	W1119 22:09:53.633501  161375 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1356/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:09:53.633594  161375 ssh_runner.go:195] Run: ls
	I1119 22:09:53.637559  161375 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1119 22:09:53.641540  161375 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1119 22:09:53.641562  161375 status.go:463] multinode-841678 apiserver status = Running (err=<nil>)
	I1119 22:09:53.641572  161375 status.go:176] multinode-841678 status: &{Name:multinode-841678 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:09:53.641593  161375 status.go:174] checking status of multinode-841678-m02 ...
	I1119 22:09:53.641824  161375 cli_runner.go:164] Run: docker container inspect multinode-841678-m02 --format={{.State.Status}}
	I1119 22:09:53.660392  161375 status.go:371] multinode-841678-m02 host status = "Running" (err=<nil>)
	I1119 22:09:53.660424  161375 host.go:66] Checking if "multinode-841678-m02" exists ...
	I1119 22:09:53.660687  161375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-841678-m02
	I1119 22:09:53.679088  161375 host.go:66] Checking if "multinode-841678-m02" exists ...
	I1119 22:09:53.679339  161375 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:09:53.679377  161375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-841678-m02
	I1119 22:09:53.698730  161375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21918-9296/.minikube/machines/multinode-841678-m02/id_rsa Username:docker}
	I1119 22:09:53.790205  161375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:09:53.803199  161375 status.go:176] multinode-841678-m02 status: &{Name:multinode-841678-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:09:53.803249  161375 status.go:174] checking status of multinode-841678-m03 ...
	I1119 22:09:53.803519  161375 cli_runner.go:164] Run: docker container inspect multinode-841678-m03 --format={{.State.Status}}
	I1119 22:09:53.822756  161375 status.go:371] multinode-841678-m03 host status = "Stopped" (err=<nil>)
	I1119 22:09:53.822777  161375 status.go:384] host is not running, skipping remaining checks
	I1119 22:09:53.822782  161375 status.go:176] multinode-841678-m03 status: &{Name:multinode-841678-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
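
As the exit codes above show, `status` returns non-zero as soon as any node in the profile is down. A minimal sketch of the scenario, assuming the same three-node profile:
	minikube -p multinode-841678 node stop m03
	minikube -p multinode-841678 status
	echo $?   # 7 while multinode-841678-m03 reports host/kubelet Stopped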

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (7.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-841678 node start m03 -v=5 --alsologtostderr: (6.489745119s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.20s)
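
Restarting just the stopped worker is the inverse operation; a short sketch with the same profile:
	minikube -p multinode-841678 node start m03 -v=5 --alsologtostderr
	minikube -p multinode-841678 status -v=5 --alsologtostderr
	kubectl get nodes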

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (72.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-841678
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-841678
E1119 22:10:16.170186   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-841678: (25.054286435s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-841678 --wait=true -v=5 --alsologtostderr
E1119 22:11:08.019547   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-841678 --wait=true -v=5 --alsologtostderr: (46.938836486s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-841678
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.12s)
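
The restart path above stops the whole profile and brings it back with all nodes intact. Sketch, assuming the same profile:
	minikube stop -p multinode-841678
	minikube start -p multinode-841678 --wait=true -v=5 --alsologtostderr
	minikube node list -p multinode-841678   # all previously added nodes should still be listed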

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-841678 node delete m03: (4.66461785s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.26s)
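
Node removal plus the readiness check the test runs afterwards, as a sketch (the go-template is copied from the log above):
	minikube -p multinode-841678 node delete m03
	kubectl get nodes
	kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"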

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (24.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-841678 stop: (23.901972425s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-841678 status: exit status 7 (102.128787ms)

                                                
                                                
-- stdout --
	multinode-841678
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-841678-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-841678 status --alsologtostderr: exit status 7 (95.488128ms)

                                                
                                                
-- stdout --
	multinode-841678
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-841678-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:11:42.465090  171056 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:11:42.465329  171056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:11:42.465337  171056 out.go:374] Setting ErrFile to fd 2...
	I1119 22:11:42.465341  171056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:11:42.465532  171056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
	I1119 22:11:42.465685  171056 out.go:368] Setting JSON to false
	I1119 22:11:42.465713  171056 mustload.go:66] Loading cluster: multinode-841678
	I1119 22:11:42.465747  171056 notify.go:221] Checking for updates...
	I1119 22:11:42.466157  171056 config.go:182] Loaded profile config "multinode-841678": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:11:42.466179  171056 status.go:174] checking status of multinode-841678 ...
	I1119 22:11:42.466658  171056 cli_runner.go:164] Run: docker container inspect multinode-841678 --format={{.State.Status}}
	I1119 22:11:42.484550  171056 status.go:371] multinode-841678 host status = "Stopped" (err=<nil>)
	I1119 22:11:42.484605  171056 status.go:384] host is not running, skipping remaining checks
	I1119 22:11:42.484619  171056 status.go:176] multinode-841678 status: &{Name:multinode-841678 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:11:42.484652  171056 status.go:174] checking status of multinode-841678-m02 ...
	I1119 22:11:42.484919  171056 cli_runner.go:164] Run: docker container inspect multinode-841678-m02 --format={{.State.Status}}
	I1119 22:11:42.503225  171056 status.go:371] multinode-841678-m02 host status = "Stopped" (err=<nil>)
	I1119 22:11:42.503261  171056 status.go:384] host is not running, skipping remaining checks
	I1119 22:11:42.503267  171056 status.go:176] multinode-841678-m02 status: &{Name:multinode-841678-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.10s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (44.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-841678 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-841678 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (43.87976906s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841678 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (44.48s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (23.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-841678
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-841678-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-841678-m02 --driver=docker  --container-runtime=containerd: exit status 14 (77.234923ms)

                                                
                                                
-- stdout --
	* [multinode-841678-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-9296/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9296/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-841678-m02' is duplicated with machine name 'multinode-841678-m02' in profile 'multinode-841678'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-841678-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-841678-m03 --driver=docker  --container-runtime=containerd: (21.175118686s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-841678
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-841678: exit status 80 (299.750012ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-841678 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-841678-m03 already exists in multinode-841678-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-841678-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-841678-m03: (1.976301177s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.59s)
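
The two failure modes above are worth spelling out: reusing an existing machine name as a profile name fails fast with exit code 14, and `node add` refuses to create a node whose name is already owned by another profile, exiting with code 80. Sketch, profile names as in the log:
	# clashes with machine name multinode-841678-m02 inside profile multinode-841678 -> exit 14
	minikube start -p multinode-841678-m02 --driver=docker --container-runtime=containerd
	# a standalone multinode-841678-m03 profile makes the next node name unavailable -> node add exits 80
	minikube start -p multinode-841678-m03 --driver=docker --container-runtime=containerd
	minikube node add -p multinode-841678
	minikube delete -p multinode-841678-m03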

                                                
                                    
x
+
TestPreload (109.69s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-187150 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-187150 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (46.181353572s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-187150 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-187150 image pull gcr.io/k8s-minikube/busybox: (2.249310361s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-187150
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-187150: (6.732989335s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-187150 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-187150 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (51.876876629s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-187150 image list
helpers_test.go:175: Cleaning up "test-preload-187150" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-187150
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-187150: (2.431951221s)
--- PASS: TestPreload (109.69s)
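
The preload scenario above, condensed; the final `image list` is presumably there to confirm that the busybox image pulled before the stop is still available after the restart:
	minikube start -p test-preload-187150 --memory=3072 --preload=false --kubernetes-version=v1.32.0 --driver=docker --container-runtime=containerd
	minikube -p test-preload-187150 image pull gcr.io/k8s-minikube/busybox
	minikube stop -p test-preload-187150
	minikube start -p test-preload-187150 --memory=3072 --wait=true --driver=docker --container-runtime=containerd
	minikube -p test-preload-187150 image list
	minikube delete -p test-preload-187150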

                                                
                                    
x
+
TestScheduledStopUnix (97.73s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-791815 --memory=3072 --driver=docker  --container-runtime=containerd
E1119 22:14:44.953955   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-791815 --memory=3072 --driver=docker  --container-runtime=containerd: (21.156708075s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-791815 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1119 22:15:05.729985  189323 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:15:05.730352  189323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:15:05.730369  189323 out.go:374] Setting ErrFile to fd 2...
	I1119 22:15:05.730375  189323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:15:05.730672  189323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
	I1119 22:15:05.731147  189323 out.go:368] Setting JSON to false
	I1119 22:15:05.731300  189323 mustload.go:66] Loading cluster: scheduled-stop-791815
	I1119 22:15:05.731661  189323 config.go:182] Loaded profile config "scheduled-stop-791815": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:15:05.731729  189323 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/scheduled-stop-791815/config.json ...
	I1119 22:15:05.731933  189323 mustload.go:66] Loading cluster: scheduled-stop-791815
	I1119 22:15:05.732038  189323 config.go:182] Loaded profile config "scheduled-stop-791815": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-791815 -n scheduled-stop-791815
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-791815 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1119 22:15:06.119261  189490 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:15:06.119408  189490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:15:06.119417  189490 out.go:374] Setting ErrFile to fd 2...
	I1119 22:15:06.119424  189490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:15:06.119659  189490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
	I1119 22:15:06.119935  189490 out.go:368] Setting JSON to false
	I1119 22:15:06.120186  189490 daemonize_unix.go:73] killing process 189358 as it is an old scheduled stop
	I1119 22:15:06.120309  189490 mustload.go:66] Loading cluster: scheduled-stop-791815
	I1119 22:15:06.120768  189490 config.go:182] Loaded profile config "scheduled-stop-791815": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:15:06.120871  189490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/scheduled-stop-791815/config.json ...
	I1119 22:15:06.121136  189490 mustload.go:66] Loading cluster: scheduled-stop-791815
	I1119 22:15:06.121272  189490 config.go:182] Loaded profile config "scheduled-stop-791815": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1119 22:15:06.126393   12821 retry.go:31] will retry after 103.606µs: open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/scheduled-stop-791815/pid: no such file or directory
I1119 22:15:06.127578   12821 retry.go:31] will retry after 193.778µs: open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/scheduled-stop-791815/pid: no such file or directory
I1119 22:15:06.128750   12821 retry.go:31] will retry after 322.552µs: open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/scheduled-stop-791815/pid: no such file or directory
I1119 22:15:06.129993   12821 retry.go:31] will retry after 402.959µs: open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/scheduled-stop-791815/pid: no such file or directory
I1119 22:15:06.131157   12821 retry.go:31] will retry after 657.554µs: open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/scheduled-stop-791815/pid: no such file or directory
I1119 22:15:06.132303   12821 retry.go:31] will retry after 1.093874ms: open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/scheduled-stop-791815/pid: no such file or directory
I1119 22:15:06.134508   12821 retry.go:31] will retry after 1.097414ms: open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/scheduled-stop-791815/pid: no such file or directory
I1119 22:15:06.135676   12821 retry.go:31] will retry after 1.389531ms: open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/scheduled-stop-791815/pid: no such file or directory
I1119 22:15:06.137870   12821 retry.go:31] will retry after 1.578059ms: open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/scheduled-stop-791815/pid: no such file or directory
I1119 22:15:06.140116   12821 retry.go:31] will retry after 5.567711ms: open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/scheduled-stop-791815/pid: no such file or directory
I1119 22:15:06.146452   12821 retry.go:31] will retry after 3.627018ms: open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/scheduled-stop-791815/pid: no such file or directory
I1119 22:15:06.150728   12821 retry.go:31] will retry after 4.918556ms: open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/scheduled-stop-791815/pid: no such file or directory
I1119 22:15:06.155982   12821 retry.go:31] will retry after 17.04934ms: open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/scheduled-stop-791815/pid: no such file or directory
I1119 22:15:06.173580   12821 retry.go:31] will retry after 15.19845ms: open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/scheduled-stop-791815/pid: no such file or directory
I1119 22:15:06.189943   12821 retry.go:31] will retry after 29.921887ms: open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/scheduled-stop-791815/pid: no such file or directory
I1119 22:15:06.220256   12821 retry.go:31] will retry after 41.649196ms: open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/scheduled-stop-791815/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-791815 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1119 22:15:16.170484   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-791815 -n scheduled-stop-791815
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-791815
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-791815 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1119 22:15:32.025277  190379 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:15:32.025540  190379 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:15:32.025548  190379 out.go:374] Setting ErrFile to fd 2...
	I1119 22:15:32.025552  190379 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:15:32.025813  190379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
	I1119 22:15:32.026170  190379 out.go:368] Setting JSON to false
	I1119 22:15:32.026284  190379 mustload.go:66] Loading cluster: scheduled-stop-791815
	I1119 22:15:32.026764  190379 config.go:182] Loaded profile config "scheduled-stop-791815": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:15:32.026857  190379 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/scheduled-stop-791815/config.json ...
	I1119 22:15:32.027074  190379 mustload.go:66] Loading cluster: scheduled-stop-791815
	I1119 22:15:32.027175  190379 config.go:182] Loaded profile config "scheduled-stop-791815": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-791815
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-791815: exit status 7 (81.152138ms)

                                                
                                                
-- stdout --
	scheduled-stop-791815
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-791815 -n scheduled-stop-791815
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-791815 -n scheduled-stop-791815: exit status 7 (83.815382ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-791815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-791815
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-791815: (5.044495607s)
--- PASS: TestScheduledStopUnix (97.73s)
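
The scheduled-stop lifecycle exercised above, as a sketch (same profile; the timings are illustrative):
	minikube stop -p scheduled-stop-791815 --schedule 5m      # schedule a stop
	minikube stop -p scheduled-stop-791815 --schedule 15s     # re-scheduling kills the previously scheduled stop
	minikube stop -p scheduled-stop-791815 --cancel-scheduled # cancel all pending scheduled stops
	minikube status -p scheduled-stop-791815 --format={{.Host}}   # reports Stopped (exit 7) once a schedule fires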

                                                
                                    
x
+
TestInsufficientStorage (9.7s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-169458 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-169458 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.21718801s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"35003389-497e-4342-ac31-f0ca3215393b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-169458] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f768287a-4d77-47a3-9252-12368de8cc21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21918"}}
	{"specversion":"1.0","id":"099f20b8-9d1f-4741-a260-41f125fc0940","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"572b0cb3-96c7-408b-bcf5-0c19773e00c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21918-9296/kubeconfig"}}
	{"specversion":"1.0","id":"265fa23c-9cd8-4ad9-9ba5-049e6ffd87b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9296/.minikube"}}
	{"specversion":"1.0","id":"2af8e816-d71d-4305-9e5b-53f06a939d11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"49cb4abb-d00f-4156-8e28-c5e6452017a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0b9ff70c-aa16-4efb-bbb2-1a721fda8651","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"eae8337d-2cd1-4090-bb9f-619d38063558","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9b2db151-178a-4a27-a560-69eb95c95096","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1eec975-0b94-4e19-a0d2-ae0e37321389","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"dd37dfb7-b8c8-4b09-bdfb-54d090504cd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-169458\" primary control-plane node in \"insufficient-storage-169458\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fb5bd8d9-b98a-40f7-bfff-cba072314d95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763561786-21918 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a99d103-847a-4896-bafc-aab93f79d820","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4293a0a2-5f0b-42d4-ae2b-c08c174cdede","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-169458 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-169458 --output=json --layout=cluster: exit status 7 (294.768045ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-169458","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-169458","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1119 22:16:29.740136  192650 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-169458" does not appear in /home/jenkins/minikube-integration/21918-9296/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-169458 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-169458 --output=json --layout=cluster: exit status 7 (289.883086ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-169458","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-169458","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1119 22:16:30.029841  192760 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-169458" does not appear in /home/jenkins/minikube-integration/21918-9296/kubeconfig
	E1119 22:16:30.040947  192760 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/insufficient-storage-169458/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-169458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-169458
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-169458: (1.894104549s)
--- PASS: TestInsufficientStorage (9.70s)
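
The storage check above is driven by the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE variables visible in the JSON output, which simulate a nearly full /var. A sketch, assuming those test-only variables behave the same way outside the suite:
	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  minikube start -p insufficient-storage-169458 --memory=3072 --output=json --wait=true --driver=docker --container-runtime=containerd
	echo $?   # 26 (RSRC_DOCKER_STORAGE); the emitted error text notes '--force' skips the check
	minikube status -p insufficient-storage-169458 --output=json --layout=cluster   # StatusCode 507 / InsufficientStorage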

                                                
                                    
x
+
TestRunningBinaryUpgrade (103.85s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.610864953 start -p running-upgrade-297552 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1119 22:16:39.238156   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.610864953 start -p running-upgrade-297552 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (1m15.356411648s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-297552 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-297552 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (21.434590844s)
helpers_test.go:175: Cleaning up "running-upgrade-297552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-297552
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-297552: (4.052587586s)
--- PASS: TestRunningBinaryUpgrade (103.85s)
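
The running-binary upgrade boils down to starting a cluster with an old release and then re-running `start` on the same profile with the binary under test. Sketch (the old-binary path is illustrative; the test downloads a v1.32.0 release to a temp file):
	/tmp/minikube-v1.32.0 start -p running-upgrade-297552 --memory=3072 --vm-driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 start -p running-upgrade-297552 --memory=3072 --driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 delete -p running-upgrade-297552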

                                                
                                    
x
+
TestKubernetesUpgrade (331.51s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-133839 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-133839 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (25.072085132s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-133839
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-133839: (1.298555413s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-133839 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-133839 status --format={{.Host}}: exit status 7 (77.374315ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-133839 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-133839 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m42.11706957s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-133839 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-133839 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-133839 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (99.814585ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-133839] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-9296/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9296/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-133839
	    minikube start -p kubernetes-upgrade-133839 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1338392 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-133839 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-133839 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-133839 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (20.142072391s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-133839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-133839
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-133839: (2.623706483s)
--- PASS: TestKubernetesUpgrade (331.51s)
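
The upgrade/downgrade flow above, condensed: an in-place upgrade to v1.34.1 succeeds, an in-place downgrade is refused with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED), and the suggested way back is delete-and-recreate. Sketch with the same profile:
	minikube start -p kubernetes-upgrade-133839 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
	minikube stop -p kubernetes-upgrade-133839
	minikube start -p kubernetes-upgrade-133839 --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=containerd
	minikube start -p kubernetes-upgrade-133839 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd   # refused, exit 106
	minikube delete -p kubernetes-upgrade-133839 && minikube start -p kubernetes-upgrade-133839 --kubernetes-version=v1.28.0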

                                                
                                    
x
+
TestMissingContainerUpgrade (90.78s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2398454476 start -p missing-upgrade-755266 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2398454476 start -p missing-upgrade-755266 --memory=3072 --driver=docker  --container-runtime=containerd: (29.321971457s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-755266
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-755266
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-755266 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-755266 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (56.055478545s)
helpers_test.go:175: Cleaning up "missing-upgrade-755266" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-755266
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-755266: (2.081321759s)
--- PASS: TestMissingContainerUpgrade (90.78s)
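
The missing-container case removes the node container behind minikube's back and relies on the new binary to recreate it. Sketch (old-binary path illustrative, as above):
	/tmp/minikube-v1.32.0 start -p missing-upgrade-755266 --memory=3072 --driver=docker --container-runtime=containerd
	docker stop missing-upgrade-755266 && docker rm missing-upgrade-755266   # make the node container go missing
	out/minikube-linux-amd64 start -p missing-upgrade-755266 --memory=3072 --driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 delete -p missing-upgrade-755266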

                                                
                                    
x
+
TestPause/serial/Start (50.16s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-278761 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-278761 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (50.155594951s)
--- PASS: TestPause/serial/Start (50.16s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.66s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (105.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.832237869 start -p stopped-upgrade-285202 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.832237869 start -p stopped-upgrade-285202 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (1m15.950435179s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.832237869 -p stopped-upgrade-285202 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.832237869 -p stopped-upgrade-285202 stop: (1.309015902s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-285202 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-285202 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (28.137600047s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (105.40s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.15s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-278761 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-278761 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.137747256s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.15s)

                                                
                                    
TestPause/serial/Pause (1.82s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-278761 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-278761 --alsologtostderr -v=5: (1.819037232s)
--- PASS: TestPause/serial/Pause (1.82s)

                                                
                                    
TestPause/serial/VerifyStatus (0.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-278761 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-278761 --output=json --layout=cluster: exit status 2 (391.179824ms)

                                                
                                                
-- stdout --
	{"Name":"pause-278761","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-278761","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)

                                                
                                    
TestPause/serial/Unpause (0.91s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-278761 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.91s)

                                                
                                    
TestPause/serial/PauseAgain (0.74s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-278761 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.74s)

                                                
                                    
TestPause/serial/DeletePaused (4.84s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-278761 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-278761 --alsologtostderr -v=5: (4.839215372s)
--- PASS: TestPause/serial/DeletePaused (4.84s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.78s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-278761
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-278761: exit status 1 (18.839556ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-278761: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.78s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (4.52s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-285202
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-285202: (4.523752185s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (4.52s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-836292 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-836292 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (90.565349ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-836292] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-9296/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9296/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (23.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-836292 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-836292 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (22.72220708s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-836292 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (23.11s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-836292 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-836292 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (4.495059446s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-836292 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-836292 status -o json: exit status 2 (346.294191ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-836292","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-836292
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-836292: (3.878872594s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.72s)

                                                
                                    
TestNetworkPlugins/group/false (4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-904997 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-904997 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (187.361001ms)

                                                
                                                
-- stdout --
	* [false-904997] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-9296/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9296/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:18:50.180781  229254 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:18:50.181092  229254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:18:50.181105  229254 out.go:374] Setting ErrFile to fd 2...
	I1119 22:18:50.181111  229254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:18:50.181328  229254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9296/.minikube/bin
	I1119 22:18:50.182405  229254 out.go:368] Setting JSON to false
	I1119 22:18:50.183551  229254 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3670,"bootTime":1763587060,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:18:50.183646  229254 start.go:143] virtualization: kvm guest
	I1119 22:18:50.187004  229254 out.go:179] * [false-904997] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:18:50.188437  229254 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:18:50.188468  229254 notify.go:221] Checking for updates...
	I1119 22:18:50.190914  229254 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:18:50.192280  229254 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9296/kubeconfig
	I1119 22:18:50.193603  229254 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9296/.minikube
	I1119 22:18:50.195255  229254 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:18:50.196443  229254 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:18:50.198404  229254 config.go:182] Loaded profile config "NoKubernetes-836292": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1119 22:18:50.198557  229254 config.go:182] Loaded profile config "kubernetes-upgrade-133839": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:18:50.198677  229254 config.go:182] Loaded profile config "missing-upgrade-755266": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1119 22:18:50.198806  229254 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:18:50.227667  229254 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:18:50.227759  229254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:18:50.295634  229254 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:18:50.284429349 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:18:50.295820  229254 docker.go:319] overlay module found
	I1119 22:18:50.297703  229254 out.go:179] * Using the docker driver based on user configuration
	I1119 22:18:50.298918  229254 start.go:309] selected driver: docker
	I1119 22:18:50.298944  229254 start.go:930] validating driver "docker" against <nil>
	I1119 22:18:50.298960  229254 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:18:50.300683  229254 out.go:203] 
	W1119 22:18:50.302131  229254 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1119 22:18:50.303376  229254 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-904997 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-904997

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-904997

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-904997

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-904997

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-904997

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-904997

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-904997

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-904997

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-904997

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-904997

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-904997

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-904997" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-904997" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 22:18:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-836292
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 22:18:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-133839
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 22:18:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-755266
contexts:
- context:
    cluster: NoKubernetes-836292
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 22:18:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-836292
  name: NoKubernetes-836292
- context:
    cluster: kubernetes-upgrade-133839
    user: kubernetes-upgrade-133839
  name: kubernetes-upgrade-133839
- context:
    cluster: missing-upgrade-755266
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 22:18:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: missing-upgrade-755266
  name: missing-upgrade-755266
current-context: missing-upgrade-755266
kind: Config
users:
- name: NoKubernetes-836292
  user:
    client-certificate: /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/NoKubernetes-836292/client.crt
    client-key: /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/NoKubernetes-836292/client.key
- name: kubernetes-upgrade-133839
  user:
    client-certificate: /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/client.crt
    client-key: /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/client.key
- name: missing-upgrade-755266
  user:
    client-certificate: /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/missing-upgrade-755266/client.crt
    client-key: /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/missing-upgrade-755266/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-904997

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-904997"

                                                
                                                
----------------------- debugLogs end: false-904997 [took: 3.612528033s] --------------------------------
helpers_test.go:175: Cleaning up "false-904997" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-904997
--- PASS: TestNetworkPlugins/group/false (4.00s)

                                                
                                    
TestNoKubernetes/serial/Start (7.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-836292 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-836292 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.542010305s)
--- PASS: TestNoKubernetes/serial/Start (7.54s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21918-9296/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-836292 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-836292 "sudo systemctl is-active --quiet service kubelet": exit status 1 (288.912219ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (15.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (14.692341674s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (15.63s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-836292
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-836292: (1.283625198s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-836292 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-836292 --driver=docker  --container-runtime=containerd: (7.005262153s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.01s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-836292 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-836292 "sudo systemctl is-active --quiet service kubelet": exit status 1 (287.113902ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (53.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-975700 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1119 22:19:44.954782   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-975700 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (53.270350993s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (53.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (53.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-638439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1119 22:20:16.170305   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-638439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (53.516658188s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (53.52s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-975700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-975700 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-975700 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-975700 --alsologtostderr -v=3: (12.089676044s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-975700 -n old-k8s-version-975700
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-975700 -n old-k8s-version-975700: exit status 7 (94.572993ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-975700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (46.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-975700 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-975700 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (46.512193015s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-975700 -n old-k8s-version-975700
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.85s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.83s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-638439 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-638439 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-638439 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-638439 --alsologtostderr -v=3: (12.039303398s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-638439 -n no-preload-638439
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-638439 -n no-preload-638439: exit status 7 (81.977974ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-638439 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (50.57s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-638439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-638439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (50.192146242s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-638439 -n no-preload-638439
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (50.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-dtnwg" [8b4ceecc-87b5-44a5-ba28-1486fba890ae] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003345096s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-dtnwg" [8b4ceecc-87b5-44a5-ba28-1486fba890ae] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003260542s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-975700 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-975700 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
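The image check lists everything cached in the node and reports images outside the stock minikube/Kubernetes set, as seen above. A rough manual equivalent (the grep filter is only an illustration; the test itself inspects the JSON output):

# List cached images for the profile and filter out the registry.k8s.io ones (sketch).
out/minikube-linux-amd64 -p old-k8s-version-975700 image list --format=json | grep -v 'registry.k8s.io'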

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-975700 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-975700 -n old-k8s-version-975700
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-975700 -n old-k8s-version-975700: exit status 2 (320.977512ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-975700 -n old-k8s-version-975700
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-975700 -n old-k8s-version-975700: exit status 2 (323.805604ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-975700 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-975700 -n old-k8s-version-975700
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-975700 -n old-k8s-version-975700
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.80s)
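While the profile is paused, the APIServer field reports Paused and the Kubelet field reports Stopped, each with exit status 2, which the test tolerates before unpausing. The same cycle by hand, as a sketch against this run's profile:

# Pause, verify, unpause (sketch).
out/minikube-linux-amd64 pause -p old-k8s-version-975700 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p old-k8s-version-975700 -n old-k8s-version-975700   # Paused, exit 2
out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p old-k8s-version-975700 -n old-k8s-version-975700     # Stopped, exit 2
out/minikube-linux-amd64 unpause -p old-k8s-version-975700 --alsologtostderr -v=1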

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (40.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-299509 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-299509 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (40.665663887s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.67s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pph4x" [163535a1-7a14-42e2-bbd3-ad6e5f42f269] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004195502s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pph4x" [163535a1-7a14-42e2-bbd3-ad6e5f42f269] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003479161s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-638439 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-638439 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-638439 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-638439 -n no-preload-638439
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-638439 -n no-preload-638439: exit status 2 (327.618353ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-638439 -n no-preload-638439
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-638439 -n no-preload-638439: exit status 2 (360.269493ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-638439 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-638439 -n no-preload-638439
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-638439 -n no-preload-638439
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.98s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.74s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-409240 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-409240 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (41.737769355s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.74s)
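The only distinctive flag here is --apiserver-port=8444, which moves the API server off minikube's default port 8443. A trimmed sketch of the same start command:

# Start a profile that serves the Kubernetes API on a non-default port (sketch).
out/minikube-linux-amd64 start -p default-k8s-diff-port-409240 --memory=3072 --wait=true \
  --apiserver-port=8444 --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.1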

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (30.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-982287 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-982287 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (30.463244999s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (30.46s)
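This profile is started in CNI mode with a custom pod network CIDR handed to kubeadm, and --wait is narrowed to the components that can become healthy before a CNI plugin is set up (the later WARNING lines note that pods cannot schedule yet in this mode). A sketch of the same flags:

# Start a CNI-mode profile with a custom pod CIDR (sketch).
out/minikube-linux-amd64 start -p newest-cni-982287 --memory=3072 \
  --wait=apiserver,system_pods,default_sa --network-plugin=cni \
  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.1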

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-299509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-299509 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)
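The addon is enabled with its image and registry overridden (fake.domain), and the resulting Deployment is then described, presumably to confirm the overrides took effect. A hand-run sketch of the same pair of commands:

# Enable metrics-server with overridden image/registry, then inspect the Deployment (sketch).
out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-299509 \
  --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
kubectl --context embed-certs-299509 describe deploy/metrics-server -n kube-system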

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.65s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-299509 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-299509 --alsologtostderr -v=3: (12.653503235s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.65s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-982287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-982287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.466581752s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.47s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-982287 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-982287 --alsologtostderr -v=3: (1.366923724s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.37s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-982287 -n newest-cni-982287
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-982287 -n newest-cni-982287: exit status 7 (101.882668ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-982287 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (13.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-982287 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-982287 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (13.063781698s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-982287 -n newest-cni-982287
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.52s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-299509 -n embed-certs-299509
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-299509 -n embed-certs-299509: exit status 7 (110.401229ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-299509 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (51.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-299509 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-299509 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (51.000340339s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-299509 -n embed-certs-299509
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (43.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-904997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-904997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (43.290334477s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-982287 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.42s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-982287 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-982287 --alsologtostderr -v=1: (1.163973052s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-982287 -n newest-cni-982287
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-982287 -n newest-cni-982287: exit status 2 (420.152631ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-982287 -n newest-cni-982287
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-982287 -n newest-cni-982287: exit status 2 (451.655169ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-982287 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-982287 -n newest-cni-982287
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-982287 -n newest-cni-982287
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.76s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-409240 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-409240 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.074477302s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-409240 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.78s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-409240 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-409240 --alsologtostderr -v=3: (13.777505157s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (42.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-904997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-904997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (42.849291528s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.85s)
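Here the CNI is selected by name with --cni=kindnet; the ControllerPod test further down then waits for the kindnet pod (label app=kindnet) in kube-system. A manual sketch of both steps, where kubectl wait stands in for the harness's own polling:

# Start a kindnet-CNI profile and wait for its pod (sketch).
out/minikube-linux-amd64 start -p kindnet-904997 --memory=3072 --wait=true --wait-timeout=15m \
  --cni=kindnet --driver=docker --container-runtime=containerd
kubectl --context kindnet-904997 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m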

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-409240 -n default-k8s-diff-port-409240
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-409240 -n default-k8s-diff-port-409240: exit status 7 (99.338918ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-409240 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-409240 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-409240 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (54.226034176s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-409240 -n default-k8s-diff-port-409240
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-904997 "pgrep -a kubelet"
I1119 22:23:54.010202   12821 config.go:182] Loaded profile config "auto-904997": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (8.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-904997 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h72ws" [dc64a3f7-a8e7-4faf-91f0-f66234e78b42] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-h72ws" [dc64a3f7-a8e7-4faf-91f0-f66234e78b42] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.003898152s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.22s)
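NetCatPod replaces the netcat test Deployment in the default namespace and then waits for its pod to become ready. A manual equivalent, where kubectl wait is only an illustrative stand-in for the harness's polling on the app=netcat label:

# Deploy the netcat test pod and wait for it (sketch).
kubectl --context auto-904997 replace --force -f testdata/netcat-deployment.yaml
kubectl --context auto-904997 wait --for=condition=Ready pod -l app=netcat --timeout=15m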

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l9wz2" [fb57ffad-8cc7-4207-b85c-f2e5e01a5c95] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003200254s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l9wz2" [fb57ffad-8cc7-4207-b85c-f2e5e01a5c95] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003463298s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-299509 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-zx4z7" [cabe1e0b-3906-4b5d-9dc2-42dc92a2b6ab] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003878904s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-904997 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-904997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-904997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
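The three short checks above probe cluster DNS, pod-to-localhost, and hairpin traffic (the pod dialling the netcat name, presumably its own Service, back to itself) from inside the netcat deployment. Grouped as one sketch:

# DNS, localhost and hairpin probes from the netcat pod (sketch).
kubectl --context auto-904997 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context auto-904997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
kubectl --context auto-904997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"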

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-299509 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-299509 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-299509 -n embed-certs-299509
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-299509 -n embed-certs-299509: exit status 2 (316.171129ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-299509 -n embed-certs-299509
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-299509 -n embed-certs-299509: exit status 2 (322.958217ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-299509 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-299509 -n embed-certs-299509
I1119 22:24:08.176967   12821 config.go:182] Loaded profile config "kindnet-904997": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-299509 -n embed-certs-299509
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.89s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-904997 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-904997 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hbbh6" [670ac442-6f48-4b55-a1a6-cda68908407d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hbbh6" [670ac442-6f48-4b55-a1a6-cda68908407d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003747444s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (53.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-904997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-904997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (53.422677639s)
--- PASS: TestNetworkPlugins/group/calico/Start (53.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-904997 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-904997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-904997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gzzcf" [618df8e0-d5a4-444c-8667-a8a53c1be51c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003940441s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (53.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-904997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-904997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (53.661197575s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.66s)
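Instead of a built-in plugin name, --cni here points at a local manifest, testdata/kube-flannel.yaml, which minikube applies as the CNI. A trimmed sketch of the same start:

# Start a profile whose CNI comes from a custom manifest file (sketch).
out/minikube-linux-amd64 start -p custom-flannel-904997 --memory=3072 --wait=true --wait-timeout=15m \
  --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd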

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gzzcf" [618df8e0-d5a4-444c-8667-a8a53c1be51c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003584921s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-409240 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-409240 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-409240 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-409240 -n default-k8s-diff-port-409240
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-409240 -n default-k8s-diff-port-409240: exit status 2 (410.07834ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-409240 -n default-k8s-diff-port-409240
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-409240 -n default-k8s-diff-port-409240: exit status 2 (464.67985ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-409240 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-409240 -n default-k8s-diff-port-409240
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-409240 -n default-k8s-diff-port-409240
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.49s)
I1119 22:25:49.190119   12821 config.go:182] Loaded profile config "enable-default-cni-904997": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (68.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-904997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-904997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m8.602886691s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (68.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (60.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-904997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1119 22:24:44.954137   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/addons-130311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-904997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m0.134118425s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-cqmwn" [d3ce0d61-c189-41f5-b179-5a9978312c8a] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-cqmwn" [d3ce0d61-c189-41f5-b179-5a9978312c8a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00430451s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
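ControllerPod waits for the calico-node pods (label k8s-app=calico-node) in kube-system to report healthy. A manual equivalent; the get/wait pair is illustrative, not what the harness runs:

# Check the calico-node pods the test waits on (sketch).
kubectl --context calico-904997 -n kube-system get pods -l k8s-app=calico-node
kubectl --context calico-904997 -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=10m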

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-904997 "pgrep -a kubelet"
I1119 22:25:11.588826   12821 config.go:182] Loaded profile config "calico-904997": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-904997 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b8lpf" [7beed5d3-437b-491a-8e70-bc1d2a1afe85] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b8lpf" [7beed5d3-437b-491a-8e70-bc1d2a1afe85] Running
E1119 22:25:16.170532   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/functional-142762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004009997s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-904997 "pgrep -a kubelet"
I1119 22:25:16.855276   12821 config.go:182] Loaded profile config "custom-flannel-904997": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-904997 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-s8z9m" [eb23a7b7-f66c-491c-9fd3-64303a4fea41] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-s8z9m" [eb23a7b7-f66c-491c-9fd3-64303a4fea41] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003427138s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-904997 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-904997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-904997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-904997 exec deployment/netcat -- nslookup kubernetes.default
E1119 22:25:26.148409   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/old-k8s-version-975700/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-904997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-904997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-t8gcz" [728c71e2-d456-4765-915e-4fbbba446ea4] Running
E1119 22:25:42.609399   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:25:42.615854   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:25:42.627319   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:25:42.648702   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:25:42.690163   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:25:42.771438   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003646258s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (61.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-904997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1119 22:25:42.933656   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:25:43.255824   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:25:43.897640   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-904997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m1.109568711s)
--- PASS: TestNetworkPlugins/group/bridge/Start (61.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-904997 "pgrep -a kubelet"
I1119 22:25:47.627370   12821 config.go:182] Loaded profile config "flannel-904997": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-904997 replace --force -f testdata/netcat-deployment.yaml
E1119 22:25:47.744877   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m26gp" [984278a9-3005-4a5d-87d9-6fa205c2f5d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-m26gp" [984278a9-3005-4a5d-87d9-6fa205c2f5d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003323126s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-904997 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-904997 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wqd8h" [eb5384ad-3ae1-44b7-90e7-00ad2da19810] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wqd8h" [eb5384ad-3ae1-44b7-90e7-00ad2da19810] Running
E1119 22:25:52.870434   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/no-preload-638439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004614134s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-904997 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-904997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-904997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-904997 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-904997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-904997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-904997 "pgrep -a kubelet"
I1119 22:26:44.207945   12821 config.go:182] Loaded profile config "bridge-904997": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-904997 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zkx4k" [1ec4cbb9-2124-4cb6-ba9a-e556026a0cce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zkx4k" [1ec4cbb9-2124-4cb6-ba9a-e556026a0cce] Running
E1119 22:26:47.762046   12821 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/old-k8s-version-975700/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003214905s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-904997 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-904997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-904997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    

Test skip (26/333)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-837642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-837642
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-904997 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-904997

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-904997

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-904997

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-904997

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-904997

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-904997

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-904997

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-904997

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-904997

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-904997

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-904997

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-904997" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-904997" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 22:18:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-133839
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 22:18:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-755266
contexts:
- context:
    cluster: kubernetes-upgrade-133839
    user: kubernetes-upgrade-133839
  name: kubernetes-upgrade-133839
- context:
    cluster: missing-upgrade-755266
    user: missing-upgrade-755266
  name: missing-upgrade-755266
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-133839
  user:
    client-certificate: /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/client.crt
    client-key: /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/client.key
- name: missing-upgrade-755266
  user:
    client-certificate: /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/missing-upgrade-755266/client.crt
    client-key: /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/missing-upgrade-755266/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-904997

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-904997"

                                                
                                                
----------------------- debugLogs end: kubenet-904997 [took: 3.782038883s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-904997" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-904997
--- SKIP: TestNetworkPlugins/group/kubenet (4.01s)
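Editor's note on the kubenet debug output above: the repeated "context was not found for specified context: kubenet-904997" and "Profile \"kubenet-904997\" not found" messages are expected, since the test is skipped (net_test.go:93) before any cluster is created, and the kubectl config dump therefore lists only the kubernetes-upgrade-133839 and missing-upgrade-755266 contexts with an empty current-context. A minimal sketch of how one could confirm this locally (hypothetical verification commands, not part of the recorded test run):

	kubectl config get-contexts -o name   # prints kubernetes-upgrade-133839 and missing-upgrade-755266 only
	minikube profile list                  # kubenet-904997 is absent, matching the skip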

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-904997 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-904997

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-904997

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-904997

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-904997

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-904997

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-904997

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-904997

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-904997

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-904997

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-904997

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-904997

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-904997" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-904997

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-904997

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-904997

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-904997

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-904997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-904997" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 22:18:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-836292
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21918-9296/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 22:18:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-133839
contexts:
- context:
    cluster: NoKubernetes-836292
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 22:18:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-836292
  name: NoKubernetes-836292
- context:
    cluster: kubernetes-upgrade-133839
    user: kubernetes-upgrade-133839
  name: kubernetes-upgrade-133839
current-context: ""
kind: Config
users:
- name: NoKubernetes-836292
  user:
    client-certificate: /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/NoKubernetes-836292/client.crt
    client-key: /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/NoKubernetes-836292/client.key
- name: kubernetes-upgrade-133839
  user:
    client-certificate: /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/client.crt
    client-key: /home/jenkins/minikube-integration/21918-9296/.minikube/profiles/kubernetes-upgrade-133839/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-904997

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-904997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-904997"

                                                
                                                
----------------------- debugLogs end: cilium-904997 [took: 5.154221124s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-904997" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-904997
--- SKIP: TestNetworkPlugins/group/cilium (5.35s)